* [PATCH v2 0/3] KVM: PPC: Book3S HV: kvmhv_copy_tofrom_guest_radix changes
@ 2021-08-05 21:26 ` Fabiano Rosas
  0 siblings, 0 replies; 14+ messages in thread
From: Fabiano Rosas @ 2021-08-05 21:26 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, christophe.leroy, npiggin

This series contains the fix for __kvmhv_copy_tofrom_guest_radix plus
a couple of changes.

- Patch 1: The fix itself. I thought a smaller fix upfront would be
           better to facilitate any backports.

- Patch 2: Adds a sanity check to the low level function. Since the
           hcall API already enforces that the effective address must
           be on quadrant 0, moving the checks from the high level
           function would only add overhead to the hcall path, so I
           opted for a lightweight check at the low level.

- Patch 3: Cleans up the EXPORT_SYMBOL_GPL tags. I don't see how they
           would be needed since the functions are contained within
           kvm-hv.ko.

v1:

https://lkml.kernel.org/r/20210802234941.2568493-1-farosas@linux.ibm.com

Fabiano Rosas (3):
  KVM: PPC: Book3S HV: Fix copy_tofrom_guest routines
  KVM: PPC: Book3S HV: Add sanity check to copy_tofrom_guest
  KVM: PPC: Book3S HV: Stop exporting symbols from book3s_64_mmu_radix

 arch/powerpc/kvm/book3s_64_mmu_radix.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

-- 
2.29.2


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v2 1/3] KVM: PPC: Book3S HV: Fix copy_tofrom_guest routines
  2021-08-05 21:26 ` Fabiano Rosas
@ 2021-08-05 21:26   ` Fabiano Rosas
  -1 siblings, 0 replies; 14+ messages in thread
From: Fabiano Rosas @ 2021-08-05 21:26 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, christophe.leroy, npiggin

The __kvmhv_copy_tofrom_guest_radix function was introduced along with
nested HV guest support. It uses the platform's Radix MMU quadrants to
provide a nested hypervisor with fast access to its nested guests'
memory (H_COPY_TOFROM_GUEST hypercall). It has also since been added
as a fast path for the kvmppc_ld/st routines which are used during
instruction emulation.

The commit def0bfdbd603 ("powerpc: use probe_user_read() and
probe_user_write()") changed the low level copy function from
raw_copy_from_user to probe_user_read, which adds a check to
access_ok. In powerpc that is:

 static inline bool __access_ok(unsigned long addr, unsigned long size)
 {
         return addr < TASK_SIZE_MAX && size <= TASK_SIZE_MAX - addr;
 }

and TASK_SIZE_MAX is 0x0010000000000000UL for 64-bit, which means that
setting the two MSBs of the effective address (which correspond to the
quadrant) now causes access_ok to reject the access.
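
As an illustration (not part of the patch), the self-contained sketch
below mirrors the definitions quoted above and shows that a quadrant 1
effective address (bit 62 set) already fails the check before any copy
is attempted:

 /* Illustrative user-space sketch only; values mirror the ones above. */
 #include <stdbool.h>
 #include <stdio.h>

 #define TASK_SIZE_MAX 0x0010000000000000UL

 static bool example_access_ok(unsigned long addr, unsigned long size)
 {
         return addr < TASK_SIZE_MAX && size <= TASK_SIZE_MAX - addr;
 }

 int main(void)
 {
         unsigned long quad1_eaddr = 1UL << 62; /* quadrant 1: MSBs 0b01 */

         /* Prints 0: the quadrant bits alone exceed TASK_SIZE_MAX, so
          * the access_ok check in probe_user_read rejects the access. */
         printf("%d\n", example_access_ok(quad1_eaddr, 8));
         return 0;
 }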

This was not caught earlier because the most common code path via
kvmppc_ld/st contains a fallback (kvm_read_guest) that is likely to
succeed for L1 guests. For nested guests there is no fallback.

Another issue is that probe_user_read (now copy_from_user_nofault)
does not return the number of bytes not copied in case of failure, so
the destination memory is not being cleared anymore in
kvmhv_copy_from_guest_radix:

 ret = kvmhv_copy_tofrom_guest_radix(vcpu, eaddr, to, NULL, n);
 if (ret > 0)                            <-- always false!
         memset(to + (n - ret), 0, ret);
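
For reference, a rough sketch of the two return conventions as I
understand them (pseudo-usage for illustration, not code from the
patch):

 /* copy_from_user_nofault() returns 0 on success or -EFAULT on any
  * failure, so a positive "bytes not copied" count is never produced
  * and the memset above is effectively dead code.
  */
 ret = copy_from_user_nofault(to, (const void __user *)from, n);

 /* __copy_from_user_inatomic() returns the number of bytes NOT copied
  * (0 on full success), which lets the caller zero the uncopied tail:
  */
 ret = __copy_from_user_inatomic(to, (const void __user *)from, n);
 if (ret > 0)
         memset(to + (n - ret), 0, ret);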

This patch fixes both issues by skipping access_ok and open-coding the
low level __copy_to/from_user_inatomic.

Fixes: def0bfdbd603 ("powerpc: use probe_user_read() and probe_user_write()")
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
---
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index b5905ae4377c..44eb7b1ef289 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -65,10 +65,12 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
 	}
 	isync();
 
+	pagefault_disable();
 	if (is_load)
-		ret = copy_from_user_nofault(to, (const void __user *)from, n);
+		ret = __copy_from_user_inatomic(to, (const void __user *)from, n);
 	else
-		ret = copy_to_user_nofault((void __user *)to, from, n);
+		ret = __copy_to_user_inatomic((void __user *)to, from, n);
+	pagefault_enable();
 
 	/* switch the pid first to avoid running host with unallocated pid */
 	if (quadrant == 1 && pid != old_pid)
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 2/3] KVM: PPC: Book3S HV: Add sanity check to copy_tofrom_guest
  2021-08-05 21:26 ` Fabiano Rosas
@ 2021-08-05 21:26   ` Fabiano Rosas
  -1 siblings, 0 replies; 14+ messages in thread
From: Fabiano Rosas @ 2021-08-05 21:26 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, christophe.leroy, npiggin

Both paths into __kvmhv_copy_tofrom_guest_radix ensure that we arrive
with an effective address that is smaller than our total addressable
space and addresses quadrant 0.

- The H_COPY_TOFROM_GUEST hypercall path rejects the call with
H_PARAMETER if the effective address has any of the twelve most
significant bits set.

- The kvmhv_copy_tofrom_guest_radix path clears the top twelve bits
before calling the internal function.

Although the callers make sure that the effective address is sane, any
future use of the function is exposed to a programming error, so add a
sanity check.
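
For illustration only, a condensed sketch of the two caller-side checks
described above (the macro name is made up; the actual callers open-code
the mask):

 #define EADDR_TOP_BITS (0xFFFUL << 52) /* hypothetical name; bits 63:52 */

 /* H_COPY_TOFROM_GUEST hcall path: reject outright */
 if (eaddr & EADDR_TOP_BITS)
         return H_PARAMETER;

 /* kvmhv_copy_tofrom_guest_radix() path: clear the bits before calling
  * the internal function */
 eaddr &= ~EADDR_TOP_BITS;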

Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
---
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 44eb7b1ef289..1b1c9e9e539b 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -44,6 +44,9 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
 					  (to != NULL) ? __pa(to): 0,
 					  (from != NULL) ? __pa(from): 0, n);
 
+	if (eaddr & (0xFFFUL << 52))
+		return ret;
+
 	quadrant = 1;
 	if (!pid)
 		quadrant = 2;
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 3/3] KVM: PPC: Book3S HV: Stop exporting symbols from book3s_64_mmu_radix
  2021-08-05 21:26 ` Fabiano Rosas
@ 2021-08-05 21:26   ` Fabiano Rosas
  -1 siblings, 0 replies; 14+ messages in thread
From: Fabiano Rosas @ 2021-08-05 21:26 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, christophe.leroy, npiggin

The book3s_64_mmu_radix.o object is not part of the KVM builtins, and
all the callers of the exported symbols are in the same kvm-hv.ko
module, so we should not need to export any symbols.

Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
---
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 1b1c9e9e539b..16359525a40f 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -86,7 +86,6 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(__kvmhv_copy_tofrom_guest_radix);
 
 static long kvmhv_copy_tofrom_guest_radix(struct kvm_vcpu *vcpu, gva_t eaddr,
 					  void *to, void *from, unsigned long n)
@@ -122,14 +121,12 @@ long kvmhv_copy_from_guest_radix(struct kvm_vcpu *vcpu, gva_t eaddr, void *to,
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(kvmhv_copy_from_guest_radix);
 
 long kvmhv_copy_to_guest_radix(struct kvm_vcpu *vcpu, gva_t eaddr, void *from,
 			       unsigned long n)
 {
 	return kvmhv_copy_tofrom_guest_radix(vcpu, eaddr, NULL, from, n);
 }
-EXPORT_SYMBOL_GPL(kvmhv_copy_to_guest_radix);
 
 int kvmppc_mmu_walk_radix_tree(struct kvm_vcpu *vcpu, gva_t eaddr,
 			       struct kvmppc_pte *gpte, u64 root,
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 1/3] KVM: PPC: Book3S HV: Fix copy_tofrom_guest routines
  2021-08-05 21:26   ` Fabiano Rosas
@ 2021-08-06 10:09     ` Nicholas Piggin
  -1 siblings, 0 replies; 14+ messages in thread
From: Nicholas Piggin @ 2021-08-06 10:09 UTC (permalink / raw)
  To: Fabiano Rosas, kvm-ppc; +Cc: christophe.leroy, linuxppc-dev

Excerpts from Fabiano Rosas's message of August 6, 2021 7:26 am:
> The __kvmhv_copy_tofrom_guest_radix function was introduced along with
> nested HV guest support. It uses the platform's Radix MMU quadrants to
> provide a nested hypervisor with fast access to its nested guests'
> memory (H_COPY_TOFROM_GUEST hypercall). It has also since been added
> as a fast path for the kvmppc_ld/st routines which are used during
> instruction emulation.
> 
> The commit def0bfdbd603 ("powerpc: use probe_user_read() and
> probe_user_write()") changed the low level copy function from
> raw_copy_from_user to probe_user_read, which adds a check to
> access_ok. In powerpc that is:
> 
>  static inline bool __access_ok(unsigned long addr, unsigned long size)
>  {
>          return addr < TASK_SIZE_MAX && size <= TASK_SIZE_MAX - addr;
>  }
> 
> and TASK_SIZE_MAX is 0x0010000000000000UL for 64-bit, which means that
> setting the two MSBs of the effective address (which correspond to the
> quadrant) now causes access_ok to reject the access.
> 
> This was not caught earlier because the most common code path via
> kvmppc_ld/st contains a fallback (kvm_read_guest) that is likely to
> succeed for L1 guests. For nested guests there is no fallback.
> 
> Another issue is that probe_user_read (now copy_from_user_nofault)
> does not return the number of bytes not copied in case of failure, so
> the destination memory is not being cleared anymore in
> kvmhv_copy_from_guest_radix:
> 
>  ret = kvmhv_copy_tofrom_guest_radix(vcpu, eaddr, to, NULL, n);
>  if (ret > 0)                            <-- always false!
>          memset(to + (n - ret), 0, ret);
> 
> This patch fixes both issues by skipping access_ok and open-coding the
> low level __copy_to/from_user_inatomic.
> 
> Fixes: def0bfdbd603 ("powerpc: use probe_user_read() and probe_user_write()")

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>

> Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
> ---
>  arch/powerpc/kvm/book3s_64_mmu_radix.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> index b5905ae4377c..44eb7b1ef289 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> @@ -65,10 +65,12 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
>  	}
>  	isync();
>  
> +	pagefault_disable();
>  	if (is_load)
> -		ret = copy_from_user_nofault(to, (const void __user *)from, n);
> +		ret = __copy_from_user_inatomic(to, (const void __user *)from, n);
>  	else
> -		ret = copy_to_user_nofault((void __user *)to, from, n);
> +		ret = __copy_to_user_inatomic((void __user *)to, from, n);
> +	pagefault_enable();
>  
>  	/* switch the pid first to avoid running host with unallocated pid */
>  	if (quadrant == 1 && pid != old_pid)
> -- 
> 2.29.2
> 
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 2/3] KVM: PPC: Book3S HV: Add sanity check to copy_tofrom_guest
  2021-08-05 21:26   ` Fabiano Rosas
@ 2021-08-06 10:10     ` Nicholas Piggin
  -1 siblings, 0 replies; 14+ messages in thread
From: Nicholas Piggin @ 2021-08-06 10:10 UTC (permalink / raw)
  To: Fabiano Rosas, kvm-ppc; +Cc: christophe.leroy, linuxppc-dev

Excerpts from Fabiano Rosas's message of August 6, 2021 7:26 am:
> Both paths into __kvmhv_copy_tofrom_guest_radix ensure that we arrive
> with an effective address that is smaller than our total addressable
> space and addresses quadrant 0.
> 
> - The H_COPY_TOFROM_GUEST hypercall path rejects the call with
> H_PARAMETER if the effective address has any of the twelve most
> significant bits set.
> 
> - The kvmhv_copy_tofrom_guest_radix path clears the top twelve bits
> before calling the internal function.
> 
> Although the callers make sure that the effective address is sane, any
> future use of the function is exposed to a programming error, so add a
> sanity check.

We possibly should put these into #defines in radix pgtable headers
somewhere, but KVM already open-codes them, so this is good for now.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>

> 
> Suggested-by: Nicholas Piggin <npiggin@gmail.com>
> Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
> ---
>  arch/powerpc/kvm/book3s_64_mmu_radix.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> index 44eb7b1ef289..1b1c9e9e539b 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> @@ -44,6 +44,9 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
>  					  (to != NULL) ? __pa(to): 0,
>  					  (from != NULL) ? __pa(from): 0, n);
>  
> +	if (eaddr & (0xFFFUL << 52))
> +		return ret;
> +
>  	quadrant = 1;
>  	if (!pid)
>  		quadrant = 2;
> -- 
> 2.29.2
> 
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 0/3] KVM: PPC: Book3S HV: kvmhv_copy_tofrom_guest_radix changes
  2021-08-05 21:26 ` Fabiano Rosas
@ 2021-08-27 13:15   ` Michael Ellerman
  -1 siblings, 0 replies; 14+ messages in thread
From: Michael Ellerman @ 2021-08-27 13:15 UTC (permalink / raw)
  To: Fabiano Rosas, kvm-ppc; +Cc: christophe.leroy, linuxppc-dev, npiggin

On Thu, 5 Aug 2021 18:26:13 -0300, Fabiano Rosas wrote:
> This series contains the fix for __kvmhv_copy_tofrom_guest_radix plus
> a couple of changes.
> 
> - Patch 1: The fix itself. I thought a smaller fix upfront would be
>            better to facilitate any backports.
> 
> - Patch 2: Adds a sanity check to the low level function. Since the
>            hcall API already enforces that the effective address must
>            be on quadrant 0, moving the checks from the high level
>            function would only add overhead to the hcall path, so I
>            opted for a lightweight check at the low level.
> 
> [...]

Applied to powerpc/next.

[1/3] KVM: PPC: Book3S HV: Fix copy_tofrom_guest routines
      https://git.kernel.org/powerpc/c/5d7d6dac8fe99ed59eee2300e4a03370f94d5222
[2/3] KVM: PPC: Book3S HV: Add sanity check to copy_tofrom_guest
      https://git.kernel.org/powerpc/c/c232461c0c3b0aca637f0d7658a7f5d0bb393a4e
[3/3] KVM: PPC: Book3S HV: Stop exporting symbols from book3s_64_mmu_radix
      https://git.kernel.org/powerpc/c/0eb596f1e6103ebe122792a425b88c5dc21c4087

cheers

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2021-08-27 13:28 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-05 21:26 [PATCH v2 0/3] KVM: PPC: Book3S HV: kvmhv_copy_tofrom_guest_radix changes Fabiano Rosas
2021-08-05 21:26 ` [PATCH v2 1/3] KVM: PPC: Book3S HV: Fix copy_tofrom_guest routines Fabiano Rosas
2021-08-06 10:09   ` Nicholas Piggin
2021-08-05 21:26 ` [PATCH v2 2/3] KVM: PPC: Book3S HV: Add sanity check to copy_tofrom_guest Fabiano Rosas
2021-08-06 10:10   ` Nicholas Piggin
2021-08-05 21:26 ` [PATCH v2 3/3] KVM: PPC: Book3S HV: Stop exporting symbols from book3s_64_mmu_radix Fabiano Rosas
2021-08-27 13:15 ` [PATCH v2 0/3] KVM: PPC: Book3S HV: kvmhv_copy_tofrom_guest_radix changes Michael Ellerman
