* [PATCH] MIPS: Fix MSA ld unaligned failure cases
@ 2016-02-03  3:35 ` Paul Burton
  0 siblings, 0 replies; 4+ messages in thread
From: Paul Burton @ 2016-02-03  3:35 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, stable # v4.3, Leonid Yegoshin,
	Maciej W. Rozycki, linux-kernel, James Cowgill, Markos Chandras

Copying the content of an MSA vector from user memory may involve TLB
faults & mapping in pages. This will fail when preemption is disabled
due to an inability to acquire mmap_sem from do_page_fault, which meant
such vector loads to unmapped pages would always fail to be emulated.
Fix this by disabling preemption later only around the updating of
vector register state.

This change does however introduce a race between performing the load
into thread context & the thread being preempted, saving its current
live context & clobbering the loaded value. This should be a rare
occurrence, so optimise for the fast path by simply repeating the load if
we are preempted.

Additionally if the copy failed then the failure path was taken with
preemption left disabled, leading to the kernel typically encountering
further issues around sleeping whilst atomic. The change to where
preemption is disabled avoids this issue.

Fixes: e4aa1f153add ("MIPS: MSA unaligned memory access support")
Reported-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: stable <stable@vger.kernel.org> # v4.3

---

 arch/mips/kernel/unaligned.c | 51 ++++++++++++++++++++++++++------------------
 1 file changed, 30 insertions(+), 21 deletions(-)

diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
index 490cea5..5c62065 100644
--- a/arch/mips/kernel/unaligned.c
+++ b/arch/mips/kernel/unaligned.c
@@ -885,7 +885,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
 {
 	union mips_instruction insn;
 	unsigned long value;
-	unsigned int res;
+	unsigned int res, preempted;
 	unsigned long origpc;
 	unsigned long orig31;
 	void __user *fault_addr = NULL;
@@ -1226,27 +1226,36 @@ static void emulate_load_store_insn(struct pt_regs *regs,
 			if (!access_ok(VERIFY_READ, addr, sizeof(*fpr)))
 				goto sigbus;
 
-			/*
-			 * Disable preemption to avoid a race between copying
-			 * state from userland, migrating to another CPU and
-			 * updating the hardware vector register below.
-			 */
-			preempt_disable();
-
-			res = __copy_from_user_inatomic(fpr, addr,
-							sizeof(*fpr));
-			if (res)
-				goto fault;
-
-			/*
-			 * Update the hardware register if it is in use by the
-			 * task in this quantum, in order to avoid having to
-			 * save & restore the whole vector context.
-			 */
-			if (test_thread_flag(TIF_USEDMSA))
-				write_msa_wr(wd, fpr, df);
+			do {
+				/*
+				 * If we have live MSA context keep track of
+				 * whether we get preempted in order to avoid
+				 * the register context we load being clobbered
+				 * by the live context as it's saved during
+				 * preemption. If we don't have live context
+				 * then it can't be saved to clobber the value
+				 * we load.
+				 */
+				preempted = test_thread_flag(TIF_USEDMSA);
+
+				res = __copy_from_user_inatomic(fpr, addr,
+								sizeof(*fpr));
+				if (res)
+					goto fault;
 
-			preempt_enable();
+				/*
+				 * Update the hardware register if it is in use
+				 * by the task in this quantum, in order to
+				 * avoid having to save & restore the whole
+				 * vector context.
+				 */
+				preempt_disable();
+				if (test_thread_flag(TIF_USEDMSA)) {
+					write_msa_wr(wd, fpr, df);
+					preempted = 0;
+				}
+				preempt_enable();
+			} while (preempted);
 			break;
 
 		case msa_st_op:
-- 
2.7.0



* Re: [PATCH] MIPS: Fix MSA ld unaligned failure cases
@ 2016-02-03 14:44   ` James Hogan
  0 siblings, 0 replies; 4+ messages in thread
From: James Hogan @ 2016-02-03 14:44 UTC (permalink / raw)
  To: Paul Burton
  Cc: linux-mips, Ralf Baechle, stable # v4.3, Leonid Yegoshin,
	Maciej W. Rozycki, linux-kernel, James Cowgill, Markos Chandras


On Wed, Feb 03, 2016 at 03:35:49AM +0000, Paul Burton wrote:
> Copying the content of an MSA vector from user memory may involve TLB
> faults & mapping in pages. This will fail when preemption is disabled
> due to an inability to acquire mmap_sem from do_page_fault, which meant
> such vector loads to unmapped pages would always fail to be emulated.
> Fix this by disabling preemption later only around the updating of
> vector register state.
> 
> This change does however introduce a race between performing the load
> into thread context & the thread being preempted, saving its current
> live context & clobbering the loaded value. This should be a rare
> occurrence, so optimise for the fast path by simply repeating the load if
> we are preempted.
> 
> Additionally if the copy failed then the failure path was taken with
> preemption left disabled, leading to the kernel typically encountering
> further issues around sleeping whilst atomic. The change to where
> preemption is disabled avoids this issue.
> 
> Fixes: e4aa1f153add ("MIPS: MSA unaligned memory access support")
> Reported-by: James Hogan <james.hogan@imgtec.com>
> Signed-off-by: Paul Burton <paul.burton@imgtec.com>
> Cc: stable <stable@vger.kernel.org> # v4.3

Reviewed-by: James Hogan <james.hogan@imgtec.com>

Cheers
James

> 
> ---
> 
>  arch/mips/kernel/unaligned.c | 51 ++++++++++++++++++++++++++------------------
>  1 file changed, 30 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
> index 490cea5..5c62065 100644
> --- a/arch/mips/kernel/unaligned.c
> +++ b/arch/mips/kernel/unaligned.c
> @@ -885,7 +885,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
>  {
>  	union mips_instruction insn;
>  	unsigned long value;
> -	unsigned int res;
> +	unsigned int res, preempted;
>  	unsigned long origpc;
>  	unsigned long orig31;
>  	void __user *fault_addr = NULL;
> @@ -1226,27 +1226,36 @@ static void emulate_load_store_insn(struct pt_regs *regs,
>  			if (!access_ok(VERIFY_READ, addr, sizeof(*fpr)))
>  				goto sigbus;
>  
> -			/*
> -			 * Disable preemption to avoid a race between copying
> -			 * state from userland, migrating to another CPU and
> -			 * updating the hardware vector register below.
> -			 */
> -			preempt_disable();
> -
> -			res = __copy_from_user_inatomic(fpr, addr,
> -							sizeof(*fpr));
> -			if (res)
> -				goto fault;
> -
> -			/*
> -			 * Update the hardware register if it is in use by the
> -			 * task in this quantum, in order to avoid having to
> -			 * save & restore the whole vector context.
> -			 */
> -			if (test_thread_flag(TIF_USEDMSA))
> -				write_msa_wr(wd, fpr, df);
> +			do {
> +				/*
> +				 * If we have live MSA context keep track of
> +				 * whether we get preempted in order to avoid
> +				 * the register context we load being clobbered
> +				 * by the live context as it's saved during
> +				 * preemption. If we don't have live context
> +				 * then it can't be saved to clobber the value
> +				 * we load.
> +				 */
> +				preempted = test_thread_flag(TIF_USEDMSA);
> +
> +				res = __copy_from_user_inatomic(fpr, addr,
> +								sizeof(*fpr));
> +				if (res)
> +					goto fault;
>  
> -			preempt_enable();
> +				/*
> +				 * Update the hardware register if it is in use
> +				 * by the task in this quantum, in order to
> +				 * avoid having to save & restore the whole
> +				 * vector context.
> +				 */
> +				preempt_disable();
> +				if (test_thread_flag(TIF_USEDMSA)) {
> +					write_msa_wr(wd, fpr, df);
> +					preempted = 0;
> +				}
> +				preempt_enable();
> +			} while (preempted);
>  			break;
>  
>  		case msa_st_op:
> -- 
> 2.7.0
> 



