* [PATCH 0/2] KVM: fixes for the kernel-hardening tree
From: Paolo Bonzini @ 2017-10-20 23:25 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: kernel-hardening, Kees Cook, Radim Krčmář,
	Christoffer Dall, Marc Zyngier, Christian Borntraeger,
	Cornelia Huck, James Hogan, Paul Mackerras

Two KVM ioctls (KVM_GET/SET_CPUID2) directly access the cpuid_entries
field of struct kvm_vcpu_arch.  Therefore, the new usercopy hardening
work in linux-next, which forbids copies from and to slab objects
unless they are from kmalloc or explicitly whitelisted, breaks KVM
completely.

This series fixes it by adding the two new usercopy arguments
to kvm_init (more precisely to a new function kvm_init_usercopy,
while kvm_init passes zeroes as a default).
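
With usercopy hardening, each slab cache declares at most one window that copy_from_user/copy_to_user may touch; anything outside that window triggers a hardening fault. A rough userspace model of the bounds check (an illustration only, not the kernel's actual __check_heap_object() implementation) is:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified model of the hardened-usercopy check: a copy touching a
 * slab object is allowed only if the requested range lies entirely
 * inside the cache's whitelisted [useroffset, useroffset+usersize)
 * window.  Written to avoid overflow in the comparisons. */
static bool copy_allowed(size_t copy_off, size_t copy_len,
                         size_t useroffset, size_t usersize)
{
    return copy_off >= useroffset &&
           copy_len <= usersize &&
           copy_off - useroffset <= usersize - copy_len;
}
```

Before this series, kvm_vcpu's cache declared no window at all, so the CPUID ioctls' copies were rejected outright.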

There's also another broken ioctl, KVM_XEN_HVM_CONFIG, but it is
obsolete and not a big deal at all.

I'm Ccing all submaintainers in case they have something similar
going on in their kvm_arch and kvm_vcpu_arch structs.  KVM has a
pretty complex userspace API, so thorough testing with linux-next is
highly recommended.

Many thanks to Thomas Gleixner for reporting this to me.

Paolo

Paolo Bonzini (2):
  KVM: allow setting a usercopy region in struct kvm_vcpu
  KVM: fix KVM_XEN_HVM_CONFIG ioctl

 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/svm.c              |  4 ++--
 arch/x86/kvm/vmx.c              |  4 ++--
 arch/x86/kvm/x86.c              | 17 ++++++++++++++---
 include/linux/kvm_host.h        | 13 +++++++++++--
 virt/kvm/kvm_main.c             | 13 ++++++++-----
 6 files changed, 40 insertions(+), 14 deletions(-)

-- 
2.14.2

* [PATCH 1/2] KVM: allow setting a usercopy region in struct kvm_vcpu
From: Paolo Bonzini @ 2017-10-20 23:25 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Thomas Gleixner, kernel-hardening, Kees Cook,
	Radim Krčmář,
	Christoffer Dall, Marc Zyngier, Christian Borntraeger,
	Cornelia Huck, James Hogan, Paul Mackerras

On x86, struct kvm_vcpu has a usercopy region corresponding to the CPUID
entries.  The area is read and written by the KVM_GET/SET_CPUID2 ioctls.
Without this patch, KVM is completely broken on x86 with usercopy
hardening enabled.

Define kvm_init in terms of a more generic function that allows setting
a usercopy region.  Because x86 has separate kvm_init callers for Intel and
AMD, another variant called kvm_init_x86 passes the region corresponding
to the cpuid_entries array.
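
The offset arithmetic can be sketched with mock struct layouts (the field names and sizes below are invented for illustration, not the real kvm_vcpu/kvm_vcpu_arch definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for kvm_vcpu/kvm_vcpu_arch -- only the
 * arithmetic matters, not the layout. */
struct mock_cpuid_entry { unsigned function, index, eax, ebx, ecx, edx; };

struct mock_vcpu_arch {
	unsigned long regs[16];
	struct mock_cpuid_entry cpuid_entries[80];	/* whitelisted field */
	int cpuid_nent;
};

struct mock_vcpu {
	int vcpu_id;
	struct mock_vcpu_arch arch;
};

/* kvm_init_x86() passes the region relative to kvm_vcpu_arch;
 * kvm_init_usercopy() then rebases it onto kvm_vcpu by adding
 * offsetof(struct kvm_vcpu, arch), as in the kvm_main.c hunk below. */
static size_t usercopy_offset(void)
{
	return offsetof(struct mock_vcpu, arch) +
	       offsetof(struct mock_vcpu_arch, cpuid_entries);
}

static size_t usercopy_size(void)
{
	return sizeof(((struct mock_vcpu_arch *)0)->cpuid_entries);
}
```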

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: Kees Cook <keescook@chromium.org>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
	The patch is on top of linux-next.

 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/svm.c              |  4 ++--
 arch/x86/kvm/vmx.c              |  4 ++--
 arch/x86/kvm/x86.c              | 10 ++++++++++
 include/linux/kvm_host.h        | 13 +++++++++++--
 virt/kvm/kvm_main.c             | 13 ++++++++-----
 6 files changed, 36 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6b8f937ca398..bb8243d413d0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1420,6 +1420,9 @@ static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
+int kvm_init_x86(struct kvm_x86_ops *kvm_x86_ops, unsigned vcpu_size,
+	         unsigned vcpu_align, struct module *module);
+
 static inline int kvm_cpu_get_apicid(int mps_cpu)
 {
 #ifdef CONFIG_X86_LOCAL_APIC
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index ff94552f85d0..457433c3a703 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5594,8 +5594,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 
 static int __init svm_init(void)
 {
-	return kvm_init(&svm_x86_ops, sizeof(struct vcpu_svm),
-			__alignof__(struct vcpu_svm), THIS_MODULE);
+	return kvm_init_x86(&svm_x86_ops, sizeof(struct vcpu_svm),
+			    __alignof__(struct vcpu_svm), THIS_MODULE);
 }
 
 static void __exit svm_exit(void)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c460b0b439d3..6e78530df6a8 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -12106,8 +12106,8 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 
 static int __init vmx_init(void)
 {
-	int r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx),
-                     __alignof__(struct vcpu_vmx), THIS_MODULE);
+	int r = kvm_init_x86(&vmx_x86_ops, sizeof(struct vcpu_vmx),
+			     __alignof__(struct vcpu_vmx), THIS_MODULE);
 	if (r)
 		return r;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5669af09b732..415529a78c37 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8181,6 +8181,16 @@ void kvm_arch_sync_events(struct kvm *kvm)
 	kvm_free_pit(kvm);
 }
 
+int kvm_init_x86(struct kvm_x86_ops *kvm_x86_ops, unsigned vcpu_size,
+		 unsigned vcpu_align, struct module *module)
+{
+	return kvm_init_usercopy(kvm_x86_ops, vcpu_size, vcpu_align,
+				 offsetof(struct kvm_vcpu_arch, cpuid_entries),
+				 sizeof_field(struct kvm_vcpu_arch, cpuid_entries),
+				 module);
+}
+EXPORT_SYMBOL_GPL(kvm_init_x86);
+
 int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
 {
 	int i, r;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6882538eda32..21e19658b086 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -561,8 +561,17 @@ static inline void kvm_irqfd_exit(void)
 {
 }
 #endif
-int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
-		  struct module *module);
+
+int kvm_init_usercopy(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
+		      unsigned vcpu_usercopy_start, unsigned vcpu_usercopy_size,
+		      struct module *module);
+
+static inline int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
+			   struct module *module)
+{
+	return kvm_init_usercopy(opaque, vcpu_size, vcpu_align, 0, 0, module);
+}
+
 void kvm_exit(void);
 
 void kvm_get_kvm(struct kvm *kvm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 261c782a688f..ac889b28bb54 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3959,8 +3959,9 @@ static void kvm_sched_out(struct preempt_notifier *pn,
 	kvm_arch_vcpu_put(vcpu);
 }
 
-int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
-		  struct module *module)
+int kvm_init_usercopy(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
+		      unsigned vcpu_arch_usercopy_start, unsigned vcpu_arch_usercopy_size,
+		      struct module *module)
 {
 	int r;
 	int cpu;
@@ -4006,8 +4007,10 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 	/* A kmem cache lets us meet the alignment requirements of fx_save. */
 	if (!vcpu_align)
 		vcpu_align = __alignof__(struct kvm_vcpu);
-	kvm_vcpu_cache = kmem_cache_create("kvm_vcpu", vcpu_size, vcpu_align,
-					   SLAB_ACCOUNT, NULL);
+	kvm_vcpu_cache = kmem_cache_create_usercopy("kvm_vcpu", vcpu_size, vcpu_align,
+						    SLAB_ACCOUNT,
+						    offsetof(struct kvm_vcpu, arch) + vcpu_arch_usercopy_start,
+						    vcpu_arch_usercopy_size, NULL);
 	if (!kvm_vcpu_cache) {
 		r = -ENOMEM;
 		goto out_free_3;
@@ -4065,7 +4068,7 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 out_fail:
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_init);
+EXPORT_SYMBOL_GPL(kvm_init_usercopy);
 
 void kvm_exit(void)
 {
-- 
2.14.2

* [PATCH 2/2] KVM: fix KVM_XEN_HVM_CONFIG ioctl
From: Paolo Bonzini @ 2017-10-20 23:25 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Christoffer Dall, Marc Zyngier, Christian Borntraeger,
	Cornelia Huck, James Hogan, Paul Mackerras, kernel-hardening,
	Kees Cook, Radim Krčmář

This ioctl is obsolete (as far as I know it was only used by Xenner),
but let's not break it gratuitously.  Its handler copies directly
into struct kvm.  Go through a bounce buffer instead, with the added
benefit that we can actually do something useful with the flags
argument: the previous code was returning -EINVAL but still doing
the copy.

This technically is a userspace ABI breakage, but since no one should be
using the ioctl, it's a good occasion to see if someone actually
complains.
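
The bounce-buffer pattern can be sketched in plain userspace C (struct names and fields are assumed for illustration, and memcpy stands in for copy_from_user):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-ins, not the real kvm_xen_hvm_config layout. */
struct xen_cfg { unsigned flags; unsigned long blob_addr; };
struct vm_state { struct xen_cfg xen_cfg; };

/* The pattern from the patch: copy the user data into a stack
 * temporary, validate it, and only then commit it, so invalid input
 * never reaches the long-lived structure. */
static int set_cfg(struct vm_state *vm, const struct xen_cfg *user_ptr)
{
	struct xen_cfg xhc;

	memcpy(&xhc, user_ptr, sizeof(xhc));	/* models copy_from_user() */
	if (xhc.flags)
		return -1;			/* models -EINVAL */
	memcpy(&vm->xen_cfg, &xhc, sizeof(xhc));
	return 0;
}
```

With the old code, a rejected config had already been written into the persistent state; here the reject path leaves it untouched.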

Cc: kernel-hardening@lists.openwall.com
Cc: Kees Cook <keescook@chromium.org>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 415529a78c37..c76d7afa30be 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4218,13 +4218,14 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		mutex_unlock(&kvm->lock);
 		break;
 	case KVM_XEN_HVM_CONFIG: {
+		struct kvm_xen_hvm_config xhc;
 		r = -EFAULT;
-		if (copy_from_user(&kvm->arch.xen_hvm_config, argp,
-				   sizeof(struct kvm_xen_hvm_config)))
+		if (copy_from_user(&xhc, argp, sizeof(xhc)))
 			goto out;
 		r = -EINVAL;
-		if (kvm->arch.xen_hvm_config.flags)
+		if (xhc.flags)
 			goto out;
+		memcpy(&kvm->arch.xen_hvm_config, &xhc, sizeof(xhc));
 		r = 0;
 		break;
 	}
-- 
2.14.2

* Re: [PATCH 1/2] KVM: allow setting a usercopy region in struct kvm_vcpu
From: Kees Cook @ 2017-10-21 14:53 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: LKML, KVM, Thomas Gleixner, kernel-hardening,
	Radim Krčmář,
	Christoffer Dall, Marc Zyngier, Christian Borntraeger,
	Cornelia Huck, James Hogan, Paul Mackerras

On Fri, Oct 20, 2017 at 4:25 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On x86, struct kvm_vcpu has a usercopy region corresponding to the CPUID
> entries.  The area is read and written by the KVM_GET/SET_CPUID2 ioctls.
> Without this patch, KVM is completely broken on x86 with usercopy
> hardening enabled.
>
> Define kvm_init in terms of a more generic function that allows setting
> a usercopy region.  Because x86 has separate kvm_init callers for Intel and
> AMD, another variant called kvm_init_x86 passes the region corresponding
> to the cpuid_entries array.
>
> Reported-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: kernel-hardening@lists.openwall.com
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: James Hogan <james.hogan@imgtec.com>
> Cc: Paul Mackerras <paulus@samba.org>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>         The patch is on top of linux-next.
>
>  arch/x86/include/asm/kvm_host.h |  3 +++
>  arch/x86/kvm/svm.c              |  4 ++--
>  arch/x86/kvm/vmx.c              |  4 ++--
>  arch/x86/kvm/x86.c              | 10 ++++++++++
>  include/linux/kvm_host.h        | 13 +++++++++++--
>  virt/kvm/kvm_main.c             | 13 ++++++++-----
>  6 files changed, 36 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 6b8f937ca398..bb8243d413d0 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1420,6 +1420,9 @@ static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
>
>  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>
> +int kvm_init_x86(struct kvm_x86_ops *kvm_x86_ops, unsigned vcpu_size,
> +                unsigned vcpu_align, struct module *module);
> +
>  static inline int kvm_cpu_get_apicid(int mps_cpu)
>  {
>  #ifdef CONFIG_X86_LOCAL_APIC
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index ff94552f85d0..457433c3a703 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -5594,8 +5594,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>
>  static int __init svm_init(void)
>  {
> -       return kvm_init(&svm_x86_ops, sizeof(struct vcpu_svm),
> -                       __alignof__(struct vcpu_svm), THIS_MODULE);
> +       return kvm_init_x86(&svm_x86_ops, sizeof(struct vcpu_svm),
> +                           __alignof__(struct vcpu_svm), THIS_MODULE);
>  }
>
>  static void __exit svm_exit(void)
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index c460b0b439d3..6e78530df6a8 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -12106,8 +12106,8 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>
>  static int __init vmx_init(void)
>  {
> -       int r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx),
> -                     __alignof__(struct vcpu_vmx), THIS_MODULE);
> +       int r = kvm_init_x86(&vmx_x86_ops, sizeof(struct vcpu_vmx),
> +                            __alignof__(struct vcpu_vmx), THIS_MODULE);
>         if (r)
>                 return r;
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5669af09b732..415529a78c37 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8181,6 +8181,16 @@ void kvm_arch_sync_events(struct kvm *kvm)
>         kvm_free_pit(kvm);
>  }
>
> +int kvm_init_x86(struct kvm_x86_ops *kvm_x86_ops, unsigned vcpu_size,
> +                unsigned vcpu_align, struct module *module)
> +{
> +       return kvm_init_usercopy(kvm_x86_ops, vcpu_size, vcpu_align,
> +                                offsetof(struct kvm_vcpu_arch, cpuid_entries),
> +                                sizeof_field(struct kvm_vcpu_arch, cpuid_entries),
> +                                module);
> +}
> +EXPORT_SYMBOL_GPL(kvm_init_x86);
> +
>  int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
>  {
>         int i, r;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 6882538eda32..21e19658b086 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -561,8 +561,17 @@ static inline void kvm_irqfd_exit(void)
>  {
>  }
>  #endif
> -int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
> -                 struct module *module);
> +
> +int kvm_init_usercopy(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
> +                     unsigned vcpu_usercopy_start, unsigned vcpu_usercopy_size,
> +                     struct module *module);
> +
> +static inline int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
> +                          struct module *module)
> +{
> +       return kvm_init_usercopy(opaque, vcpu_size, vcpu_align, 0, 0, module);
> +}
> +
>  void kvm_exit(void);
>
>  void kvm_get_kvm(struct kvm *kvm);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 261c782a688f..ac889b28bb54 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -3959,8 +3959,9 @@ static void kvm_sched_out(struct preempt_notifier *pn,
>         kvm_arch_vcpu_put(vcpu);
>  }
>
> -int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
> -                 struct module *module)
> +int kvm_init_usercopy(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
> +                     unsigned vcpu_arch_usercopy_start, unsigned vcpu_arch_usercopy_size,
> +                     struct module *module)
>  {
>         int r;
>         int cpu;
> @@ -4006,8 +4007,10 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
>         /* A kmem cache lets us meet the alignment requirements of fx_save. */
>         if (!vcpu_align)
>                 vcpu_align = __alignof__(struct kvm_vcpu);
> -       kvm_vcpu_cache = kmem_cache_create("kvm_vcpu", vcpu_size, vcpu_align,
> -                                          SLAB_ACCOUNT, NULL);
> +       kvm_vcpu_cache = kmem_cache_create_usercopy("kvm_vcpu", vcpu_size, vcpu_align,
> +                                                   SLAB_ACCOUNT,
> +                                                   offsetof(struct kvm_vcpu, arch) + vcpu_arch_usercopy_start,
> +                                                   vcpu_arch_usercopy_size, NULL);

I adjusted this hunk for the usercopy tree (SLAB_ACCOUNT got added in
the KVM tree, I think).

-Kees

>         if (!kvm_vcpu_cache) {
>                 r = -ENOMEM;
>                 goto out_free_3;
> @@ -4065,7 +4068,7 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
>  out_fail:
>         return r;
>  }
> -EXPORT_SYMBOL_GPL(kvm_init);
> +EXPORT_SYMBOL_GPL(kvm_init_usercopy);
>
>  void kvm_exit(void)
>  {
> --
> 2.14.2
>
>



-- 
Kees Cook
Pixel Security

> +                                                   offsetof(struct kvm_vcpu, arch) + vcpu_arch_usercopy_start,
> +                                                   vcpu_arch_usercopy_size, NULL);

I adjusted this hunk for the usercopy tree (SLAB_ACCOUNT got added in
the KVM tree, I think).

-Kees

>         if (!kvm_vcpu_cache) {
>                 r = -ENOMEM;
>                 goto out_free_3;
> @@ -4065,7 +4068,7 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
>  out_fail:
>         return r;
>  }
> -EXPORT_SYMBOL_GPL(kvm_init);
> +EXPORT_SYMBOL_GPL(kvm_init_usercopy);
>
>  void kvm_exit(void)
>  {
> --
> 2.14.2
>
>



-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH] KVM: arm/arm64: Allow usercopy to vcpu->arch.ctxt and arm64 debug
  2017-10-20 23:25 ` [kernel-hardening] " Paolo Bonzini
  (?)
@ 2017-10-21 18:45   ` Christoffer Dall
  -1 siblings, 0 replies; 43+ messages in thread
From: Christoffer Dall @ 2017-10-21 18:45 UTC (permalink / raw)
  To: kvmarm
  Cc: Kees Cook, kvm, kernel-hardening, Marc Zyngier, linux-arm-kernel,
	Paolo Bonzini

We do direct useraccess copying to the kvm_cpu_context structure
embedded in the kvm_vcpu_arch structure, and to the vcpu debug register
state.  Everything else (timer, PMU, vgic) goes through a temporary
indirection.

Fixing all accesses to kvm_cpu_context is massively invasive, and we'd
like to avoid that, so we tell kvm_init_usercopy to whitelist accesses
to our context structure.

The debug system register accesses on arm64 are modified to work through
an indirection instead.

Cc: kernel-hardening@lists.openwall.com
Cc: Kees Cook <keescook@chromium.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
This fixes KVM/ARM on today's linux-next with CONFIG_HARDENED_USERCOPY.

The patch is based on linux-next plus Paolo's x86 patch which introduces
kvm_init_usercopy.  Not sure how this needs to get merged, but it would
potentially make sense for Paolo to put together a set of the patches
needed for this.

Thanks,
-Christoffer

 arch/arm64/kvm/sys_regs.c | 36 ++++++++++++++++++++----------------
 virt/kvm/arm/arm.c        |  5 ++++-
 2 files changed, 24 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2e070d3baf9f..cdf47a9108fe 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -293,19 +293,20 @@ static bool trap_bvr(struct kvm_vcpu *vcpu,
 static int set_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
+	__u64 r;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_from_user(&r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
+	vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg] = r;
 	return 0;
 }
 
 static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
+	__u64 r = vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_to_user(uaddr, &r, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
 	return 0;
 }
@@ -335,10 +336,11 @@ static bool trap_bcr(struct kvm_vcpu *vcpu,
 static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
+	__u64 r;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_from_user(&r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
+	vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg] = r;
 
 	return 0;
 }
@@ -346,9 +348,9 @@ static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
+	__u64 r = vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_to_user(uaddr, &r, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
 	return 0;
 }
@@ -379,19 +381,20 @@ static bool trap_wvr(struct kvm_vcpu *vcpu,
 static int set_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
+	__u64 r;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_from_user(&r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
+	vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg] = r;
 	return 0;
 }
 
 static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
+	__u64 r = vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_to_user(uaddr, &r, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
 	return 0;
 }
@@ -421,19 +424,20 @@ static bool trap_wcr(struct kvm_vcpu *vcpu,
 static int set_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
+	__u64 r;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_from_user(&r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
+	vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg] = r;
 	return 0;
 }
 
 static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
+	__u64 r = vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_to_user(uaddr, &r, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
 	return 0;
 }
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index b9f68e4add71..639e388678ff 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1502,7 +1502,10 @@ void kvm_arch_exit(void)
 
 static int arm_init(void)
 {
-	int rc = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
+	int rc = kvm_init_usercopy(NULL, sizeof(struct kvm_vcpu), 0,
+				   offsetof(struct kvm_vcpu_arch, ctxt),
+				   sizeof_field(struct kvm_vcpu_arch, ctxt),
+				   THIS_MODULE);
 	return rc;
 }
 
-- 
2.14.2

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH] KVM: arm/arm64: Allow usercopy to vcpu->arch.ctxt and arm64 debug
  2017-10-21 18:45   ` Christoffer Dall
  (?)
@ 2017-10-22  3:06     ` Kees Cook
  -1 siblings, 0 replies; 43+ messages in thread
From: Kees Cook @ 2017-10-22  3:06 UTC (permalink / raw)
  To: Christoffer Dall, Paolo Bonzini
  Cc: KVM, kernel-hardening, Marc Zyngier, kvmarm, linux-arm-kernel

On Sat, Oct 21, 2017 at 11:45 AM, Christoffer Dall
<christoffer.dall@linaro.org> wrote:
> We do direct useraccess copying to the kvm_cpu_context structure
> embedded in the kvm_vcpu_arch structure, and to the vcpu debug register
> state.  Everything else (timer, PMU, vgic) goes through a temporary
> indirection.

Are these copies done with a dynamic size? The normal way these get
whitelisted is via __builtin_constant_p() sizes on the copy. Looking at
KVM_REG_SIZE(), though, it seems that would be a dynamic calculation.

> Fixing all accesses to kvm_cpu_context is massively invasive, and we'd
> like to avoid that, so we tell kvm_init_usercopy to whitelist accesses
> to our context structure.
>
> The debug system register accesses on arm64 are modified to work through
> an indirection instead.
>
> Cc: kernel-hardening@lists.openwall.com
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> ---
> This fixes KVM/ARM on today's linux-next with CONFIG_HARDENED_USERCOPY.
>
> The patch is based on linux-next plus Paolo's x86 patch which introduces
> kvm_init_usercopy.  Not sure how this needs to get merged, but it would
> potentially make sense for Paolo to put together a set of the patches
> needed for this.

I was planning to carry Paolo's patches, and I can take this one too.
If this poses a problem, then I could just do a two-phase commit of
the whitelisting code, leaving the very last commit (which enables the
defense for anything not yet whitelisted), until the KVM trees land.

What's preferred?

Thanks for looking at this!

-Kees

>
> Thanks,
> -Christoffer
>
>  arch/arm64/kvm/sys_regs.c | 36 ++++++++++++++++++++----------------
>  virt/kvm/arm/arm.c        |  5 ++++-
>  2 files changed, 24 insertions(+), 17 deletions(-)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 2e070d3baf9f..cdf47a9108fe 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -293,19 +293,20 @@ static bool trap_bvr(struct kvm_vcpu *vcpu,
>  static int set_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>                 const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> -       __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
> +       __u64 r;
>
> -       if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
> +       if (copy_from_user(&r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
>                 return -EFAULT;
> +       vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg] = r;
>         return 0;
>  }
>
>  static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>         const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> -       __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
> +       __u64 r = vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
>
> -       if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
> +       if (copy_to_user(uaddr, &r, KVM_REG_SIZE(reg->id)) != 0)
>                 return -EFAULT;
>         return 0;
>  }
> @@ -335,10 +336,11 @@ static bool trap_bcr(struct kvm_vcpu *vcpu,
>  static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>                 const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> -       __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
> +       __u64 r;
>
> -       if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
> +       if (copy_from_user(&r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
>                 return -EFAULT;
> +       vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg] = r;
>
>         return 0;
>  }
> @@ -346,9 +348,9 @@ static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>  static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>         const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> -       __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
> +       __u64 r = vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
>
> -       if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
> +       if (copy_to_user(uaddr, &r, KVM_REG_SIZE(reg->id)) != 0)
>                 return -EFAULT;
>         return 0;
>  }
> @@ -379,19 +381,20 @@ static bool trap_wvr(struct kvm_vcpu *vcpu,
>  static int set_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>                 const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> -       __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
> +       __u64 r;
>
> -       if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
> +       if (copy_from_user(&r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
>                 return -EFAULT;
> +       vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg] = r;
>         return 0;
>  }
>
>  static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>         const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> -       __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
> +       __u64 r = vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
>
> -       if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
> +       if (copy_to_user(uaddr, &r, KVM_REG_SIZE(reg->id)) != 0)
>                 return -EFAULT;
>         return 0;
>  }
> @@ -421,19 +424,20 @@ static bool trap_wcr(struct kvm_vcpu *vcpu,
>  static int set_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>                 const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> -       __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
> +       __u64 r;
>
> -       if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
> +       if (copy_from_user(&r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
>                 return -EFAULT;
> +       vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg] = r;
>         return 0;
>  }
>
>  static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>         const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> -       __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
> +       __u64 r = vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
>
> -       if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
> +       if (copy_to_user(uaddr, &r, KVM_REG_SIZE(reg->id)) != 0)
>                 return -EFAULT;
>         return 0;
>  }
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index b9f68e4add71..639e388678ff 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -1502,7 +1502,10 @@ void kvm_arch_exit(void)
>
>  static int arm_init(void)
>  {
> -       int rc = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
> +       int rc = kvm_init_usercopy(NULL, sizeof(struct kvm_vcpu), 0,
> +                                  offsetof(struct kvm_vcpu_arch, ctxt),
> +                                  sizeof_field(struct kvm_vcpu_arch, ctxt),
> +                                  THIS_MODULE);
>         return rc;
>  }
>
> --
> 2.14.2
>



-- 
Kees Cook
Pixel Security
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 43+ messages in thread

>                 const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> -       __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
> +       __u64 r;
>
> -       if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
> +       if (copy_from_user(&r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
>                 return -EFAULT;
> +       vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg] = r;
>         return 0;
>  }
>
>  static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>         const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> -       __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
> +       __u64 r = vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
>
> -       if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
> +       if (copy_to_user(uaddr, &r, KVM_REG_SIZE(reg->id)) != 0)
>                 return -EFAULT;
>         return 0;
>  }
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index b9f68e4add71..639e388678ff 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -1502,7 +1502,10 @@ void kvm_arch_exit(void)
>
>  static int arm_init(void)
>  {
> -       int rc = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
> +       int rc = kvm_init_usercopy(NULL, sizeof(struct kvm_vcpu), 0,
> +                                  offsetof(struct kvm_vcpu_arch, ctxt),
> +                                  sizeof_field(struct kvm_vcpu_arch, ctxt),
> +                                  THIS_MODULE);
>         return rc;
>  }
>
> --
> 2.14.2
>



-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] KVM: arm/arm64: Allow usercopy to vcpu->arch.ctxt and arm64 debug
  2017-10-22  3:06     ` Kees Cook
@ 2017-10-22  7:44       ` Christoffer Dall
  -1 siblings, 0 replies; 43+ messages in thread
From: Christoffer Dall @ 2017-10-22  7:44 UTC (permalink / raw)
  To: Kees Cook
  Cc: KVM, kernel-hardening, Marc Zyngier, Paolo Bonzini, kvmarm,
	linux-arm-kernel

On Sat, Oct 21, 2017 at 08:06:10PM -0700, Kees Cook wrote:
> On Sat, Oct 21, 2017 at 11:45 AM, Christoffer Dall
> <christoffer.dall@linaro.org> wrote:
> > We do direct useraccess copying to the kvm_cpu_context structure
> > embedded in the kvm_vcpu_arch structure, and to the vcpu debug register
> > state.  Everything else (timer, PMU, vgic) goes through a temporary
> > indirection.
> 
> Are these copies done with a dynamic size? The normal way these get
> whitelisted is via builtin_const sizes on the copy. Looking at
> KVM_REG_SIZE(), though, it seems that would be a dynamic calculation.
> 

It's super confusing, but it's actually static.

We can only get to these functions via kvm_arm_sys_reg_get_reg() and
kvm_arm_sys_reg_set_reg(), and they both do

	if (KVM_REG_SIZE(reg->id) != sizeof(__u64))
		return -ENOENT;

So this is always a u64 copy.  However, I think it's much clearer if I
rewrite these to use get_user() and put_user().  v2 incoming.

> > Fixing all accesses to kvm_cpu_context is massively invasive, and we'd
> > like to avoid that, so we tell kvm_init_usercopy to whitelist accesses
> > to our context structure.
> >
> > The debug system register accesses on arm64 are modified to work through
> > an indirection instead.
> >
> > Cc: kernel-hardening@lists.openwall.com
> > Cc: Kees Cook <keescook@chromium.org>
> > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > Cc: Radim Krčmář <rkrcmar@redhat.com>
> > Cc: Marc Zyngier <marc.zyngier@arm.com>
> > Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> > ---
> > This fixes KVM/ARM on today's linux next with CONFIG_HARDENED_USERCOPY.
> >
> > The patch is based on linux-next plus Paolo's x86 patch which introduces
> > kvm_init_usercopy.  Not sure how this needs to get merged, but it would
> > potentially make sense for Paolo to put together a set of the patches
> > needed for this.
> 
> I was planning to carry Paolo's patches, and I can take this one too.

Sounds good to me.

> If this poses a problem, I could just do a two-phase commit of the
> whitelisting code, deferring the very last commit (which enables the
> defense for anything not yet whitelisted) until the KVM trees land.
> 
> What's preferred?

Assuming there's an ack from Marc Zyngier on v2 of this patch, I prefer
you just take them as part of your series.

> 
> Thanks for looking at this!
> 
No problem,
-Christoffer
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v2] KVM: arm/arm64: Allow usercopy to vcpu->arch.ctxt and arm64 debug
  2017-10-20 23:25 ` [kernel-hardening] " Paolo Bonzini
@ 2017-10-22  7:48   ` Christoffer Dall
  -1 siblings, 0 replies; 43+ messages in thread
From: Christoffer Dall @ 2017-10-22  7:48 UTC (permalink / raw)
  To: kvmarm; +Cc: linux-arm-kernel, kvm, Christoffer Dall

We do direct useraccess copying to the kvm_cpu_context structure
embedded in the kvm_vcpu_arch structure, and to the vcpu debug register
state.  Everything else (timer, PMU, vgic) goes through a temporary
indirection.

Fixing all accesses to kvm_cpu_context is massively invasive, and we'd
like to avoid that, so we tell kvm_init_usercopy to whitelist accesses
to our context structure.

The debug system register accesses on arm64 are modified to work through
an indirection instead.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
Changes since v1:
  - Use get_user() and put_user() instead of the implicit understanding
    that these will always be 64-bit values.

 arch/arm64/kvm/sys_regs.c | 44 ++++++++++++++++++++++++++++----------------
 virt/kvm/arm/arm.c        |  5 ++++-
 2 files changed, 32 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2e070d3baf9f..34b9e1734a3f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -293,19 +293,22 @@ static bool trap_bvr(struct kvm_vcpu *vcpu,
 static int set_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
+	__u64 __user *uval = uaddr;
+	__u64 r;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (get_user(r, uval))
 		return -EFAULT;
+	vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg] = r;
 	return 0;
 }
 
 static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
+	__u64 r = vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
+	__u64 __user *uval = uaddr;
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	if (put_user(r, uval))
 		return -EFAULT;
 	return 0;
 }
@@ -335,10 +338,12 @@ static bool trap_bcr(struct kvm_vcpu *vcpu,
 static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
+	__u64 __user *uval = uaddr;
+	__u64 r;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (get_user(r, uval))
 		return -EFAULT;
+	vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg] = r;
 
 	return 0;
 }
@@ -346,9 +351,10 @@ static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
+	__u64 r = vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
+	__u64 __user *uval = uaddr;
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	if (put_user(r, uval))
 		return -EFAULT;
 	return 0;
 }
@@ -379,19 +385,22 @@ static bool trap_wvr(struct kvm_vcpu *vcpu,
 static int set_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
+	__u64 __user *uval = uaddr;
+	__u64 r;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (get_user(r, uval))
 		return -EFAULT;
+	vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg] = r;
 	return 0;
 }
 
 static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
+	__u64 r = vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
+	__u64 __user *uval = uaddr;
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	if (put_user(r, uval))
 		return -EFAULT;
 	return 0;
 }
@@ -421,19 +430,22 @@ static bool trap_wcr(struct kvm_vcpu *vcpu,
 static int set_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
+	__u64 __user *uval = uaddr;
+	__u64 r;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (get_user(r, uval))
 		return -EFAULT;
+	vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg] = r;
 	return 0;
 }
 
 static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
+	__u64 r = vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
+	__u64 __user *uval = uaddr;
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	if (put_user(r, uval))
 		return -EFAULT;
 	return 0;
 }
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index b9f68e4add71..639e388678ff 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1502,7 +1502,10 @@ void kvm_arch_exit(void)
 
 static int arm_init(void)
 {
-	int rc = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
+	int rc = kvm_init_usercopy(NULL, sizeof(struct kvm_vcpu), 0,
+				   offsetof(struct kvm_vcpu_arch, ctxt),
+				   sizeof_field(struct kvm_vcpu_arch, ctxt),
+				   THIS_MODULE);
 	return rc;
 }
 
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH 0/2] KVM: fixes for the kernel-hardening tree
  2017-10-20 23:25 ` [kernel-hardening] " Paolo Bonzini
@ 2017-10-23  9:52   ` David Hildenbrand
  -1 siblings, 0 replies; 43+ messages in thread
From: David Hildenbrand @ 2017-10-23  9:52 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm
  Cc: kernel-hardening, Kees Cook, Radim Krčmář,
	Christoffer Dall, Marc Zyngier, Christian Borntraeger,
	Cornelia Huck, James Hogan, Paul Mackerras

On 21.10.2017 01:25, Paolo Bonzini wrote:
> Two KVM ioctls (KVM_GET/SET_CPUID2) directly access the cpuid_entries
> field of struct kvm_vcpu_arch.  Therefore, the new usercopy hardening
> work in linux-next, which forbids copies from and to slab objects
> unless they are from kmalloc or explicitly whitelisted, breaks KVM
> completely.
> 
> This series fixes it by adding the two new usercopy arguments
> to kvm_init (more precisely to a new function kvm_init_usercopy,
> while kvm_init passes zeroes as a default).
> 
> There's also another broken ioctl, KVM_XEN_HVM_CONFIG, but it is
> obsolete and not a big deal at all.
> 
> I'm Ccing all submaintainers in case they have something similar
> going on in their kvm_arch and kvm_vcpu_arch structs.  KVM has a
> pretty complex userspace API, so thorough testing with linux-next is highly
> recommended.

I assume on s390x, at least

kvm_arch_vcpu_ioctl_get_one_reg() and
kvm_arch_vcpu_ioctl_set_one_reg()

have to be fixed.

Christian, are you already looking into this?

> 
> Many thanks to Thomas Gleixner for reporting this to me.
> 
> Paolo
> 
> Paolo Bonzini (2):
>   KVM: allow setting a usercopy region in struct kvm_vcpu
>   KVM: fix KVM_XEN_HVM_CONFIG ioctl
> 
>  arch/x86/include/asm/kvm_host.h |  3 +++
>  arch/x86/kvm/svm.c              |  4 ++--
>  arch/x86/kvm/vmx.c              |  4 ++--
>  arch/x86/kvm/x86.c              | 17 ++++++++++++++---
>  include/linux/kvm_host.h        | 13 +++++++++++--
>  virt/kvm/kvm_main.c             | 13 ++++++++-----
>  6 files changed, 40 insertions(+), 14 deletions(-)
> 


-- 

Thanks,

David

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 0/2] KVM: fixes for the kernel-hardening tree
  2017-10-23  9:52   ` [kernel-hardening] " David Hildenbrand
@ 2017-10-23 11:10     ` Christian Borntraeger
  -1 siblings, 0 replies; 43+ messages in thread
From: Christian Borntraeger @ 2017-10-23 11:10 UTC (permalink / raw)
  To: David Hildenbrand, Paolo Bonzini, linux-kernel, kvm
  Cc: kernel-hardening, Kees Cook, Radim Krčmář,
	Christoffer Dall, Marc Zyngier, Cornelia Huck, James Hogan,
	Paul Mackerras



On 10/23/2017 11:52 AM, David Hildenbrand wrote:
> On 21.10.2017 01:25, Paolo Bonzini wrote:
>> Two KVM ioctls (KVM_GET/SET_CPUID2) directly access the cpuid_entries
>> field of struct kvm_vcpu_arch.  Therefore, the new usercopy hardening
>> work in linux-next, which forbids copies from and to slab objects
>> unless they are from kmalloc or explicitly whitelisted, breaks KVM
>> completely.
>>
>> This series fixes it by adding the two new usercopy arguments
>> to kvm_init (more precisely to a new function kvm_init_usercopy,
>> while kvm_init passes zeroes as a default).
>>
>> There's also another broken ioctl, KVM_XEN_HVM_CONFIG, but it is
>> obsolete and not a big deal at all.
>>
>> I'm Ccing all submaintainers in case they have something similar
>> going on in their kvm_arch and kvm_vcpu_arch structs.  KVM has a
>> pretty complex userspace API, so thorough testing with linux-next is highly
>> recommended.
> 
> I assume on s390x, at least
> 
> kvm_arch_vcpu_ioctl_get_one_reg() and
> kvm_arch_vcpu_ioctl_set_one_reg()
> 
> have to be fixed.
> 
> Christian, are you already looking into this?

Not yet; I am preparing for travel.

> 
>>
>> Many thanks to Thomas Gleixner for reporting this to me.
>>
>> Paolo
>>
>> Paolo Bonzini (2):
>>   KVM: allow setting a usercopy region in struct kvm_vcpu
>>   KVM: fix KVM_XEN_HVM_CONFIG ioctl
>>
>>  arch/x86/include/asm/kvm_host.h |  3 +++
>>  arch/x86/kvm/svm.c              |  4 ++--
>>  arch/x86/kvm/vmx.c              |  4 ++--
>>  arch/x86/kvm/x86.c              | 17 ++++++++++++++---
>>  include/linux/kvm_host.h        | 13 +++++++++++--
>>  virt/kvm/kvm_main.c             | 13 ++++++++-----
>>  6 files changed, 40 insertions(+), 14 deletions(-)
>>
> 
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 0/2] KVM: fixes for the kernel-hardening tree
  2017-10-23  9:52   ` [kernel-hardening] " David Hildenbrand
@ 2017-10-23 12:39     ` Cornelia Huck
  -1 siblings, 0 replies; 43+ messages in thread
From: Cornelia Huck @ 2017-10-23 12:39 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Paolo Bonzini, linux-kernel, kvm, kernel-hardening, Kees Cook,
	Radim Krčmář,
	Christoffer Dall, Marc Zyngier, Christian Borntraeger,
	James Hogan, Paul Mackerras

On Mon, 23 Oct 2017 11:52:51 +0200
David Hildenbrand <david@redhat.com> wrote:

> On 21.10.2017 01:25, Paolo Bonzini wrote:
> > Two KVM ioctls (KVM_GET/SET_CPUID2) directly access the cpuid_entries
> > field of struct kvm_vcpu_arch.  Therefore, the new usercopy hardening
> > work in linux-next, which forbids copies from and to slab objects
> > unless they are from kmalloc or explicitly whitelisted, breaks KVM
> > completely.
> > 
> > This series fixes it by adding the two new usercopy arguments
> > to kvm_init (more precisely to a new function kvm_init_usercopy,
> > while kvm_init passes zeroes as a default).
> > 
> > There's also another broken ioctl, KVM_XEN_HVM_CONFIG, but it is
> > obsolete and not a big deal at all.
> > 
> > I'm Ccing all submaintainers in case they have something similar
> > going on in their kvm_arch and kvm_vcpu_arch structs.  KVM has a
> > pretty complex userspace API, so thorough testing with linux-next is highly
> > recommended.  
> 
> I assume on s390x, at least
> 
> kvm_arch_vcpu_ioctl_get_one_reg() and
> kvm_arch_vcpu_ioctl_set_one_reg()
> 
> have to be fixed.

At a glance, seems like it.

> 
> Christian, are you already looking into this?

I'm afraid I'm also busy with travel preparation/travel, so I'd be glad
for any takers.

> 
> > 
> > Many thanks to Thomas Gleixner for reporting this to me.
> > 
> > Paolo
> > 
> > Paolo Bonzini (2):
> >   KVM: allow setting a usercopy region in struct kvm_vcpu
> >   KVM: fix KVM_XEN_HVM_CONFIG ioctl
> > 
> >  arch/x86/include/asm/kvm_host.h |  3 +++
> >  arch/x86/kvm/svm.c              |  4 ++--
> >  arch/x86/kvm/vmx.c              |  4 ++--
> >  arch/x86/kvm/x86.c              | 17 ++++++++++++++---
> >  include/linux/kvm_host.h        | 13 +++++++++++--
> >  virt/kvm/kvm_main.c             | 13 ++++++++-----
> >  6 files changed, 40 insertions(+), 14 deletions(-)
> >   
> 
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] KVM: arm/arm64: Allow usercopy to vcpu->arch.ctxt and arm64 debug
  2017-10-22  7:44       ` Christoffer Dall
@ 2017-10-23 14:14         ` Paolo Bonzini
  -1 siblings, 0 replies; 43+ messages in thread
From: Paolo Bonzini @ 2017-10-23 14:14 UTC (permalink / raw)
  To: Christoffer Dall, Kees Cook
  Cc: Christoffer Dall, kvmarm, linux-arm-kernel, KVM,
	kernel-hardening, Radim Krčmář,
	Marc Zyngier

On 22/10/2017 09:44, Christoffer Dall wrote:
> However, I think it's much clearer if I
> rewrite these to use get_user() and put_user().  v2 incoming.

I'd actually prefer it if you all did a trivial conversion to
kvm_init_usercopy to begin with.  In fact, we could just change the
default from "0, 0" to "0, sizeof(struct kvm_vcpu_arch)" in kvm_init.  Any
other change can be applied after the patches are merged into Linus's
tree, especially with KVM Forum and the merge window both coming soon.

I'll send a v2 myself later this week.

Thanks all,

Paolo

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 0/2] KVM: fixes for the kernel-hardening tree
  2017-10-23 12:39     ` [kernel-hardening] " Cornelia Huck
@ 2017-10-23 14:15       ` Paolo Bonzini
  -1 siblings, 0 replies; 43+ messages in thread
From: Paolo Bonzini @ 2017-10-23 14:15 UTC (permalink / raw)
  To: Cornelia Huck, David Hildenbrand
  Cc: linux-kernel, kvm, kernel-hardening, Kees Cook,
	Radim Krčmář,
	Christoffer Dall, Marc Zyngier, Christian Borntraeger,
	James Hogan, Paul Mackerras

On 23/10/2017 14:39, Cornelia Huck wrote:
> On Mon, 23 Oct 2017 11:52:51 +0200
> David Hildenbrand <david@redhat.com> wrote:
> 
>> On 21.10.2017 01:25, Paolo Bonzini wrote:
>>> Two KVM ioctls (KVM_GET/SET_CPUID2) directly access the cpuid_entries
>>> field of struct kvm_vcpu_arch.  Therefore, the new usercopy hardening
>>> work in linux-next, which forbids copies from and to slab objects
>>> unless they are from kmalloc or explicitly whitelisted, breaks KVM
>>> completely.
>>>
>>> This series fixes it by adding the two new usercopy arguments
>>> to kvm_init (more precisely to a new function kvm_init_usercopy,
>>> while kvm_init passes zeroes as a default).
>>>
>>> There's also another broken ioctl, KVM_XEN_HVM_CONFIG, but it is
>>> obsolete and not a big deal at all.
>>>
>>> I'm Ccing all submaintainers in case they have something similar
>>> going on in their kvm_arch and kvm_vcpu_arch structs.  KVM has a
>>> pretty complex userspace API, so thorough testing with linux-next is highly
>>> recommended.  
>>
>> I assume on s390x, at least
>>
>> kvm_arch_vcpu_ioctl_get_one_reg() and
>> kvm_arch_vcpu_ioctl_set_one_reg()
>>
>> have to be fixed.
> 
> At a glance, seems like it.
> 
>>
>> Christian, are you already looking into this?
> 
> I'm afraid I'm also busy with travel preparation/travel, so I'd be glad
> for any takers.

Let's do a generic fix now, so that we don't need to rush the switch to
explicit whitelisting.

Paolo

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] KVM: arm/arm64: Allow usercopy to vcpu->arch.ctxt and arm64 debug
  2017-10-23 14:14         ` Paolo Bonzini
@ 2017-10-23 14:49           ` Christoffer Dall
  -1 siblings, 0 replies; 43+ messages in thread
From: Christoffer Dall @ 2017-10-23 14:49 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kees Cook, kvmarm, linux-arm-kernel, KVM, kernel-hardening,
	Radim Krčmář,
	Marc Zyngier

On Mon, Oct 23, 2017 at 4:14 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 22/10/2017 09:44, Christoffer Dall wrote:
>> However, I think it's much clearer if I
>> rewrite these to use get_user() and put_user().  v2 incoming.
>
> I'd actually prefer it if you all did a trivial conversion to
> kvm_init_usercopy to begin with.  In fact, we could just change the
> default from "0, 0" to "0, sizeof(struct kvm_vcpu_arch)" in kvm_init.  Any
> other change can be applied after the patches are merged into Linus's
> tree, especially with KVM Forum and the merge window both coming soon.
>
In that case, expect no further action from me on this one until the
patches have landed and I can resend my patch, unless you specifically
tell me otherwise.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] KVM: arm/arm64: Allow usercopy to vcpu->arch.ctxt and arm64 debug
  2017-10-23 14:14         ` Paolo Bonzini
@ 2017-10-23 19:40           ` Kees Cook
  -1 siblings, 0 replies; 43+ messages in thread
From: Kees Cook @ 2017-10-23 19:40 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Christoffer Dall, KVM, kernel-hardening, Marc Zyngier, kvmarm,
	linux-arm-kernel

On Mon, Oct 23, 2017 at 7:14 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 22/10/2017 09:44, Christoffer Dall wrote:
>> However, I think it's much clearer if I
>> rewrite these to use get_user() and put_user().  v2 incoming.
>
> I'd actually prefer it if you all did a trivial conversion to
> kvm_init_usercopy to begin with.  In fact, we could just change the
> default from "0, 0" to "0, sizeof(struct kvm_vcpu_arch)" in kvm_init.  Any
> other change can be applied after the patches are merged into Linus's
> tree, especially with KVM Forum and the merge window both coming soon.
>
> I'll send a v2 myself later this week.

Okay, which patches would you like me to carry in the usercopy
whitelisting tree for the coming merge window?

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 43+ messages in thread

* R: Re: [PATCH] KVM: arm/arm64: Allow usercopy to vcpu->arch.ctxt and arm64 debug
  2017-10-23 19:40           ` Kees Cook
@ 2017-10-23 21:06             ` Paolo Bonzini
  -1 siblings, 0 replies; 43+ messages in thread
From: Paolo Bonzini @ 2017-10-23 21:06 UTC (permalink / raw)
  To: Kees Cook
  Cc: Christoffer Dall, Christoffer Dall, kvmarm, linux-arm-kernel,
	KVM, kernel-hardening, Radim Krčmář,
	Marc Zyngier


----- Kees Cook <keescook@chromium.org> ha scritto:
> On Mon, Oct 23, 2017 at 7:14 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> > On 22/10/2017 09:44, Christoffer Dall wrote:
> >> However, I think it's much clearer if I
> >> rewrite these to use get_user() and put_user().  v2 incoming.
> >
> > I'd actually prefer it if you all did a trivial conversion to
> > kvm_init_usercopy to begin with.  In fact, we could just change the
> > default from "0, 0" to "0, sizeof(struct kvm_vcpu_arch)" in kvm_init.  Any
> > other change can be applied after the patches are merged into Linus's
> > tree, especially with KVM Forum and the merge window both coming soon.
> >
> > I'll send a v2 myself later this week.
> 
> Okay, which patches would you like me to carry in the usercopy
> whitelisting tree for the coming merge window?

v2 of mine, which shall come in the next couple of days.

Paolo


> 
> -Kees
> 
> -- 
> Kees Cook
> Pixel Security

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 0/2] KVM: fixes for the kernel-hardening tree
  2017-10-23 14:15       ` [kernel-hardening] " Paolo Bonzini
@ 2017-10-25  9:45         ` David Hildenbrand
  -1 siblings, 0 replies; 43+ messages in thread
From: David Hildenbrand @ 2017-10-25  9:45 UTC (permalink / raw)
  To: Paolo Bonzini, Cornelia Huck
  Cc: linux-kernel, kvm, kernel-hardening, Kees Cook,
	Radim Krčmář,
	Christoffer Dall, Marc Zyngier, Christian Borntraeger,
	James Hogan, Paul Mackerras

On 23.10.2017 16:15, Paolo Bonzini wrote:
> On 23/10/2017 14:39, Cornelia Huck wrote:
>> On Mon, 23 Oct 2017 11:52:51 +0200
>> David Hildenbrand <david@redhat.com> wrote:
>>
>>> On 21.10.2017 01:25, Paolo Bonzini wrote:
>>>> Two KVM ioctls (KVM_GET/SET_CPUID2) directly access the cpuid_entries
>>>> field of struct kvm_vcpu_arch.  Therefore, the new usercopy hardening
>>>> work in linux-next, which forbids copies from and to slab objects
>>>> unless they are from kmalloc or explicitly whitelisted, breaks KVM
>>>> completely.
>>>>
>>>> This series fixes it by adding the two new usercopy arguments
>>>> to kvm_init (more precisely to a new function kvm_init_usercopy,
>>>> while kvm_init passes zeroes as a default).
>>>>
>>>> There's also another broken ioctl, KVM_XEN_HVM_CONFIG, but it is
>>>> obsolete and not a big deal at all.
>>>>
>>>> I'm Ccing all submaintainers in case they have something similar
>>>> going on in their kvm_arch and kvm_vcpu_arch structs.  KVM has a
>>>> pretty complex userspace API, so thorough testing with linux-next is highly
>>>> recommended.  
>>>
>>> I assume on s390x, at least
>>>
>>> kvm_arch_vcpu_ioctl_get_one_reg() and
>>> kvm_arch_vcpu_ioctl_set_one_reg()
>>>
>>> have to be fixed.
>>
>> At a glance, seems like it.
>>
>>>
>>> Christian, are you already looking into this?
>>
>> I'm afraid I'm also busy with travel preparation/travel, so I'd be glad
>> for any takers.
> 
> Let's do a generic fix now, so that we don't need to rush the switch to
> explicit whitelisting.

You mean an arch-specific fix (allowing writes/reads to the arch struct),
or something even more generic?

Otherwise I can send you a patch to fix these two functions.

> 
> Paolo
> 


-- 

Thanks,

David

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 0/2] KVM: fixes for the kernel-hardening tree
  2017-10-25  9:45         ` [kernel-hardening] " David Hildenbrand
@ 2017-10-25 10:31           ` Christian Borntraeger
  -1 siblings, 0 replies; 43+ messages in thread
From: Christian Borntraeger @ 2017-10-25 10:31 UTC (permalink / raw)
  To: David Hildenbrand, Paolo Bonzini, Cornelia Huck
  Cc: linux-kernel, kvm, kernel-hardening, Kees Cook,
	Radim Krčmář,
	Christoffer Dall, Marc Zyngier, James Hogan, Paul Mackerras



On 10/25/2017 11:45 AM, David Hildenbrand wrote:
> On 23.10.2017 16:15, Paolo Bonzini wrote:
>> On 23/10/2017 14:39, Cornelia Huck wrote:
>>> On Mon, 23 Oct 2017 11:52:51 +0200
>>> David Hildenbrand <david@redhat.com> wrote:
>>>
>>>> On 21.10.2017 01:25, Paolo Bonzini wrote:
>>>>> Two KVM ioctls (KVM_GET/SET_CPUID2) directly access the cpuid_entries
>>>>> field of struct kvm_vcpu_arch.  Therefore, the new usercopy hardening
>>>>> work in linux-next, which forbids copies from and to slab objects
>>>>> unless they are from kmalloc or explicitly whitelisted, breaks KVM
>>>>> completely.
>>>>>
>>>>> This series fixes it by adding the two new usercopy arguments
>>>>> to kvm_init (more precisely to a new function kvm_init_usercopy,
>>>>> while kvm_init passes zeroes as a default).
>>>>>
>>>>> There's also another broken ioctl, KVM_XEN_HVM_CONFIG, but it is
>>>>> obsolete and not a big deal at all.
>>>>>
>>>>> I'm Ccing all submaintainers in case they have something similar
>>>>> going on in their kvm_arch and kvm_vcpu_arch structs.  KVM has a
>>>>> pretty complex userspace API, so thorough testing with linux-next is
>>>>> highly recommended.  
>>>>
>>>> I assume on s390x, at least
>>>>
>>>> kvm_arch_vcpu_ioctl_get_one_reg() and
>>>> kvm_arch_vcpu_ioctl_set_one_reg()
>>>>
>>>> have to be fixed.
>>>
>>> At a glance, seems like it.
>>>
>>>>
>>>> Christian, are you already looking into this?
>>>
>>> I'm afraid I'm also busy with travel preparation/travel, so I'd be glad
>>> for any takers.
>>
>> Let's do a generic fix now, so that we don't need to rush the switch to
>> explicit whitelisting.
> 
> You mean a arch specific fix (allow writes/reads to arch) or even more
> generic?

Kees,

I am somewhat worried about these changes. The one_reg interface is certainly
broken right now, but newer QEMUs will not use it, so it is very likely that
testing will not find the regression. I would assume that usercopy hardening
will introduce a lot of non-obvious regressions in seldom-used code. Have you
considered some debugging aids to find "now broken" things?
> 
> Otherwise I can you send a patch to fix these two functions.

Having said that, yes, if you can send a fixup for
kvm_arch_vcpu_ioctl_get/set_one_reg, that would be great.

Christian

^ permalink raw reply	[flat|nested] 43+ messages in thread
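The fixup Christian asks for follows a simple pattern: instead of pointing copy_to_user()/copy_from_user() directly at a field inside the slab-allocated vcpu structure (which hardened usercopy rejects unless the field is whitelisted), bounce the value through a stack variable, which is always a permitted usercopy source/destination. Below is a userspace sketch of that pattern, with memcpy standing in for the user-copy primitives and a hypothetical minimal vcpu_arch; it is not the actual s390 code:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical slab-allocated state; in the kernel this would be a
 * field of struct kvm_vcpu, which hardened usercopy protects. */
struct vcpu_arch { uint64_t gprs[16]; };

/* Rejected pattern (conceptually): copy_to_user(uaddr, &arch->gprs[n], 8)
 * points straight into the slab object.
 * Accepted pattern: bounce the value through the stack. */
static void get_one_reg(const struct vcpu_arch *arch, int n, uint64_t *uaddr)
{
	uint64_t val = arch->gprs[n];	  /* read slab field into a local */
	memcpy(uaddr, &val, sizeof(val)); /* copy_to_user() from the stack */
}

static void set_one_reg(struct vcpu_arch *arch, int n, const uint64_t *uaddr)
{
	uint64_t val;
	memcpy(&val, uaddr, sizeof(val)); /* copy_from_user() to the stack */
	arch->gprs[n] = val;		  /* then store into the slab field */
}
```

The cost is one extra copy per register access, which is negligible for a slow-path ioctl, and it avoids having to whitelist every register array in kvm_vcpu_arch.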

end of thread, other threads:[~2017-10-25 10:31 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-20 23:25 [PATCH 0/2] KVM: fixes for the kernel-hardening tree Paolo Bonzini
2017-10-20 23:25 ` [kernel-hardening] " Paolo Bonzini
2017-10-20 23:25 ` [PATCH 1/2] KVM: allow setting a usercopy region in struct kvm_vcpu Paolo Bonzini
2017-10-20 23:25   ` [kernel-hardening] " Paolo Bonzini
2017-10-21 14:53   ` Kees Cook
2017-10-21 14:53     ` [kernel-hardening] " Kees Cook
2017-10-20 23:25 ` [PATCH 2/2] KVM: fix KVM_XEN_HVM_CONFIG ioctl Paolo Bonzini
2017-10-20 23:25   ` [kernel-hardening] " Paolo Bonzini
2017-10-21 18:45 ` [PATCH] KVM: arm/arm64: Allow usercopy to vcpu->arch.ctxt and arm64 debug Christoffer Dall
2017-10-21 18:45   ` [kernel-hardening] " Christoffer Dall
2017-10-21 18:45   ` Christoffer Dall
2017-10-22  3:06   ` Kees Cook
2017-10-22  3:06     ` [kernel-hardening] " Kees Cook
2017-10-22  3:06     ` Kees Cook
2017-10-22  7:44     ` Christoffer Dall
2017-10-22  7:44       ` [kernel-hardening] " Christoffer Dall
2017-10-22  7:44       ` Christoffer Dall
2017-10-23 14:14       ` Paolo Bonzini
2017-10-23 14:14         ` [kernel-hardening] " Paolo Bonzini
2017-10-23 14:14         ` Paolo Bonzini
2017-10-23 14:49         ` Christoffer Dall
2017-10-23 14:49           ` [kernel-hardening] " Christoffer Dall
2017-10-23 14:49           ` Christoffer Dall
2017-10-23 19:40         ` Kees Cook
2017-10-23 19:40           ` [kernel-hardening] " Kees Cook
2017-10-23 19:40           ` Kees Cook
2017-10-23 21:06           ` R: " Paolo Bonzini
2017-10-23 21:06             ` [kernel-hardening] " Paolo Bonzini
2017-10-23 21:06             ` Paolo Bonzini
2017-10-22  7:48 ` [PATCH v2] " Christoffer Dall
2017-10-22  7:48   ` Christoffer Dall
2017-10-23  9:52 ` [PATCH 0/2] KVM: fixes for the kernel-hardening tree David Hildenbrand
2017-10-23  9:52   ` [kernel-hardening] " David Hildenbrand
2017-10-23 11:10   ` Christian Borntraeger
2017-10-23 11:10     ` [kernel-hardening] " Christian Borntraeger
2017-10-23 12:39   ` Cornelia Huck
2017-10-23 12:39     ` [kernel-hardening] " Cornelia Huck
2017-10-23 14:15     ` Paolo Bonzini
2017-10-23 14:15       ` [kernel-hardening] " Paolo Bonzini
2017-10-25  9:45       ` David Hildenbrand
2017-10-25  9:45         ` [kernel-hardening] " David Hildenbrand
2017-10-25 10:31         ` Christian Borntraeger
2017-10-25 10:31           ` [kernel-hardening] " Christian Borntraeger
