* [PATCH 0/3] metag/usercopy: Support RAW_COPY_USER
@ 2017-04-04 14:46 ` James Hogan
  0 siblings, 0 replies; 11+ messages in thread
From: James Hogan @ 2017-04-04 14:46 UTC (permalink / raw)
  To: linux-metag, Al Viro; +Cc: James Hogan, stable

These patches, based on viro-vfs/uaccess.metag, add support for
RAW_COPY_USER to the metag architecture.

Al: Did you want these to go via your uaccess.metag branch? If so, I'm
happy to create a topic branch with the fixes (patches 1-2, marked for
stable) for you to pull, so that I can get them upstream for 4.11.

- Patch 1 drops some unused macros so they don't need fixing in patch 2.

- Patch 2 moves the zeroing out to copy_from_user(), since
  __copy_from_user() shouldn't cause zeroing, and the existing fixup code
  didn't properly zero the first failing access anyway (see the sketch
  after this list). This is needed for patch 3.

- Patch 3 switches metag to using RAW_COPY_USER, which is fairly trivial
  now that patch 2 moved the zeroing out to copy_from_user.
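
For reference, patch 2 leaves copy_from_user() looking roughly like this
(a simplified sketch matching the diff in patch 2; access_ok() stays in
copy_from_user() while raw_copy_from_user() just reports how many bytes
it couldn't copy):

	unsigned long
	copy_from_user(void *to, const void __user *from, unsigned long n)
	{
		unsigned long res = n;

		if (likely(access_ok(VERIFY_READ, from, n)))
			res = raw_copy_from_user(to, from, n);
		if (unlikely(res))			/* res bytes uncopied */
			memset(to + (n - res), 0, res);	/* zero the tail */
		return res;
	}

	/* __copy_from_user() must not zero anything on a fault */
	#define __copy_from_user(to, from, n) raw_copy_from_user(to, from, n)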

I have a bunch of other usercopy fixes for metag, but it won't hurt for
them to go in separately from these patches.

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org

James Hogan (3):
  metag/usercopy: Drop unused macros
  metag/usercopy: Zero rest of buffer from copy_from_user
  metag/usercopy: Switch to RAW_COPY_USER

 arch/metag/Kconfig               |   1 +-
 arch/metag/include/asm/uaccess.h |  34 +------
 arch/metag/lib/usercopy.c        | 171 +++-----------------------------
 3 files changed, 26 insertions(+), 180 deletions(-)

-- 
git-series 0.8.10

* [PATCH 1/3] metag/usercopy: Drop unused macros
       [not found] ` <cover.67b8034e25703fa0c68295f1ac1d7abe4bd1fac9.1491316836.git-series.james.hogan@imgtec.com>
@ 2017-04-04 14:46   ` James Hogan
  0 siblings, 0 replies; 11+ messages in thread
From: James Hogan @ 2017-04-04 14:46 UTC (permalink / raw)
  To: linux-metag, Al Viro; +Cc: James Hogan, stable

Metag's lib/usercopy.c has a bunch of copy_from_user macros for larger
copies of between 5 and 16 bytes, which are completely unused. Before
fixing the zeroing, let's drop these macros so there is less to fix.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
---
 arch/metag/lib/usercopy.c | 113 +---------------------------------------
 1 file changed, 0 insertions(+), 113 deletions(-)

diff --git a/arch/metag/lib/usercopy.c b/arch/metag/lib/usercopy.c
index b3ebfe9c8e88..b4eb1f17069f 100644
--- a/arch/metag/lib/usercopy.c
+++ b/arch/metag/lib/usercopy.c
@@ -651,119 +651,6 @@ EXPORT_SYMBOL(__copy_user);
 #define __asm_copy_from_user_4(to, from, ret) \
 	__asm_copy_from_user_4x_cont(to, from, ret, "", "", "")
 
-#define __asm_copy_from_user_5(to, from, ret) \
-	__asm_copy_from_user_4x_cont(to, from, ret,	\
-		"	GETB D1Ar1,[%1++]\n"		\
-		"4:	SETB [%0++],D1Ar1\n",		\
-		"5:	ADD  %2,%2,#1\n"		\
-		"	SETB [%0++],D1Ar1\n",		\
-		"	.long 4b,5b\n")
-
-#define __asm_copy_from_user_6x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
-	__asm_copy_from_user_4x_cont(to, from, ret,	\
-		"	GETW D1Ar1,[%1++]\n"		\
-		"4:	SETW [%0++],D1Ar1\n" COPY,	\
-		"5:	ADD  %2,%2,#2\n"		\
-		"	SETW [%0++],D1Ar1\n" FIXUP,	\
-		"	.long 4b,5b\n" TENTRY)
-
-#define __asm_copy_from_user_6(to, from, ret) \
-	__asm_copy_from_user_6x_cont(to, from, ret, "", "", "")
-
-#define __asm_copy_from_user_7(to, from, ret) \
-	__asm_copy_from_user_6x_cont(to, from, ret,	\
-		"	GETB D1Ar1,[%1++]\n"		\
-		"6:	SETB [%0++],D1Ar1\n",		\
-		"7:	ADD  %2,%2,#1\n"		\
-		"	SETB [%0++],D1Ar1\n",		\
-		"	.long 6b,7b\n")
-
-#define __asm_copy_from_user_8x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
-	__asm_copy_from_user_4x_cont(to, from, ret,	\
-		"	GETD D1Ar1,[%1++]\n"		\
-		"4:	SETD [%0++],D1Ar1\n" COPY,	\
-		"5:	ADD  %2,%2,#4\n"			\
-		"	SETD [%0++],D1Ar1\n" FIXUP,		\
-		"	.long 4b,5b\n" TENTRY)
-
-#define __asm_copy_from_user_8(to, from, ret) \
-	__asm_copy_from_user_8x_cont(to, from, ret, "", "", "")
-
-#define __asm_copy_from_user_9(to, from, ret) \
-	__asm_copy_from_user_8x_cont(to, from, ret,	\
-		"	GETB D1Ar1,[%1++]\n"		\
-		"6:	SETB [%0++],D1Ar1\n",		\
-		"7:	ADD  %2,%2,#1\n"		\
-		"	SETB [%0++],D1Ar1\n",		\
-		"	.long 6b,7b\n")
-
-#define __asm_copy_from_user_10x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
-	__asm_copy_from_user_8x_cont(to, from, ret,	\
-		"	GETW D1Ar1,[%1++]\n"		\
-		"6:	SETW [%0++],D1Ar1\n" COPY,	\
-		"7:	ADD  %2,%2,#2\n"		\
-		"	SETW [%0++],D1Ar1\n" FIXUP,	\
-		"	.long 6b,7b\n" TENTRY)
-
-#define __asm_copy_from_user_10(to, from, ret) \
-	__asm_copy_from_user_10x_cont(to, from, ret, "", "", "")
-
-#define __asm_copy_from_user_11(to, from, ret)		\
-	__asm_copy_from_user_10x_cont(to, from, ret,	\
-		"	GETB D1Ar1,[%1++]\n"		\
-		"8:	SETB [%0++],D1Ar1\n",		\
-		"9:	ADD  %2,%2,#1\n"		\
-		"	SETB [%0++],D1Ar1\n",		\
-		"	.long 8b,9b\n")
-
-#define __asm_copy_from_user_12x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
-	__asm_copy_from_user_8x_cont(to, from, ret,	\
-		"	GETD D1Ar1,[%1++]\n"		\
-		"6:	SETD [%0++],D1Ar1\n" COPY,	\
-		"7:	ADD  %2,%2,#4\n"		\
-		"	SETD [%0++],D1Ar1\n" FIXUP,	\
-		"	.long 6b,7b\n" TENTRY)
-
-#define __asm_copy_from_user_12(to, from, ret) \
-	__asm_copy_from_user_12x_cont(to, from, ret, "", "", "")
-
-#define __asm_copy_from_user_13(to, from, ret) \
-	__asm_copy_from_user_12x_cont(to, from, ret,	\
-		"	GETB D1Ar1,[%1++]\n"		\
-		"8:	SETB [%0++],D1Ar1\n",		\
-		"9:	ADD  %2,%2,#1\n"		\
-		"	SETB [%0++],D1Ar1\n",		\
-		"	.long 8b,9b\n")
-
-#define __asm_copy_from_user_14x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
-	__asm_copy_from_user_12x_cont(to, from, ret,	\
-		"	GETW D1Ar1,[%1++]\n"		\
-		"8:	SETW [%0++],D1Ar1\n" COPY,	\
-		"9:	ADD  %2,%2,#2\n"		\
-		"	SETW [%0++],D1Ar1\n" FIXUP,	\
-		"	.long 8b,9b\n" TENTRY)
-
-#define __asm_copy_from_user_14(to, from, ret) \
-	__asm_copy_from_user_14x_cont(to, from, ret, "", "", "")
-
-#define __asm_copy_from_user_15(to, from, ret) \
-	__asm_copy_from_user_14x_cont(to, from, ret,	\
-		"	GETB D1Ar1,[%1++]\n"		\
-		"10:	SETB [%0++],D1Ar1\n",		\
-		"11:	ADD  %2,%2,#1\n"		\
-		"	SETB [%0++],D1Ar1\n",		\
-		"	.long 10b,11b\n")
-
-#define __asm_copy_from_user_16x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
-	__asm_copy_from_user_12x_cont(to, from, ret,	\
-		"	GETD D1Ar1,[%1++]\n"		\
-		"8:	SETD [%0++],D1Ar1\n" COPY,	\
-		"9:	ADD  %2,%2,#4\n"		\
-		"	SETD [%0++],D1Ar1\n" FIXUP,	\
-		"	.long 8b,9b\n" TENTRY)
-
-#define __asm_copy_from_user_16(to, from, ret) \
-	__asm_copy_from_user_16x_cont(to, from, ret, "", "", "")
 
 #define __asm_copy_from_user_8x64(to, from, ret) \
 	asm volatile (				\
-- 
git-series 0.8.10

* [PATCH 2/3] metag/usercopy: Zero rest of buffer from copy_from_user
  2017-04-04 14:46 ` James Hogan
@ 2017-04-04 14:46   ` James Hogan
  -1 siblings, 0 replies; 11+ messages in thread
From: James Hogan @ 2017-04-04 14:46 UTC (permalink / raw)
  To: linux-metag, Al Viro; +Cc: James Hogan, stable

Currently we try to zero the destination for a failed read from userland
in the fixup code of the usercopy.c macros. The rest of the destination
buffer is then zeroed from __copy_user_zeroing(), which is used for both
copy_from_user() and __copy_from_user().

Unfortunately we fail to zero in the fixup code as D1Ar1 is set to 0
before the fixup code entry labels, and __copy_from_user() shouldn't even
be zeroing the rest of the buffer.
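
To make that concrete, here is roughly what __asm_copy_from_user_1()
expands to before this patch (a simplified sketch with comments added;
the exception table entry dispatches a fault straight to the "3:" label,
so the MOV that was supposed to provide the zero is dead code):

	GETB D1Ar1,[%1++]	! load from userland, may fault
2:	SETB [%0++],D1Ar1
1:				! normal code continues here
	.section .fixup,"ax"
	MOV  D1Ar1,#0		! never reached: faults enter at 3:
3:	ADD  %2,%2,#1		! retn++
	SETB [%0++],D1Ar1	! stores whatever was in D1Ar1, not zero
	MOVT D1Ar1,#HI(1b)
	JUMP D1Ar1,#LO(1b)	! resume at 1:
	.previous
	.section __ex_table,"a"
	.long 2b,3b		! fault in the copy -> fixup at 3:
	.previous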

Move the zeroing out into copy_from_user() and rename
__copy_user_zeroing() to raw_copy_from_user() since it no longer does
any zeroing. This also conveniently matches the name needed for
RAW_COPY_USER support in a later patch.

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
---
 arch/metag/include/asm/uaccess.h | 15 +++++-----
 arch/metag/lib/usercopy.c        | 52 +++++++++++----------------------
 2 files changed, 26 insertions(+), 41 deletions(-)

diff --git a/arch/metag/include/asm/uaccess.h b/arch/metag/include/asm/uaccess.h
index 2613d91bde1a..61805d9f9375 100644
--- a/arch/metag/include/asm/uaccess.h
+++ b/arch/metag/include/asm/uaccess.h
@@ -172,20 +172,21 @@ extern long __must_check strnlen_user(const char __user *src, long count);
 
 #define strlen_user(str) strnlen_user(str, 32767)
 
-extern unsigned long __must_check __copy_user_zeroing(void *to,
-						      const void __user *from,
-						      unsigned long n);
+extern unsigned long raw_copy_from_user(void *to, const void __user *from,
+					unsigned long n);
 
 static inline unsigned long
 copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	unsigned long res = n;
 	if (likely(access_ok(VERIFY_READ, from, n)))
-		return __copy_user_zeroing(to, from, n);
-	memset(to, 0, n);
-	return n;
+		res = raw_copy_from_user(to, from, n);
+	if (unlikely(res))
+		memset(to + (n - res), 0, res);
+	return res;
 }
 
-#define __copy_from_user(to, from, n) __copy_user_zeroing(to, from, n)
+#define __copy_from_user(to, from, n) raw_copy_from_user(to, from, n)
 #define __copy_from_user_inatomic __copy_from_user
 
 extern unsigned long __must_check __copy_user(void __user *to,
diff --git a/arch/metag/lib/usercopy.c b/arch/metag/lib/usercopy.c
index b4eb1f17069f..bb1f7d60fcc0 100644
--- a/arch/metag/lib/usercopy.c
+++ b/arch/metag/lib/usercopy.c
@@ -29,7 +29,6 @@
 		COPY						 \
 		"1:\n"						 \
 		"	.section .fixup,\"ax\"\n"		 \
-		"	MOV D1Ar1,#0\n"				 \
 		FIXUP						 \
 		"	MOVT    D1Ar1,#HI(1b)\n"		 \
 		"	JUMP    D1Ar1,#LO(1b)\n"		 \
@@ -618,7 +617,7 @@ EXPORT_SYMBOL(__copy_user);
 		"	GETB D1Ar1,[%1++]\n"	\
 		"2:	SETB [%0++],D1Ar1\n",	\
 		"3:	ADD  %2,%2,#1\n"	\
-		"	SETB [%0++],D1Ar1\n",	\
+		"	ADD  %0,%0,#1\n",	\
 		"	.long 2b,3b\n")
 
 #define __asm_copy_from_user_2x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
@@ -626,7 +625,7 @@ EXPORT_SYMBOL(__copy_user);
 		"	GETW D1Ar1,[%1++]\n"		\
 		"2:	SETW [%0++],D1Ar1\n" COPY,	\
 		"3:	ADD  %2,%2,#2\n"		\
-		"	SETW [%0++],D1Ar1\n" FIXUP,	\
+		"	ADD  %0,%0,#2\n" FIXUP,		\
 		"	.long 2b,3b\n" TENTRY)
 
 #define __asm_copy_from_user_2(to, from, ret) \
@@ -637,7 +636,7 @@ EXPORT_SYMBOL(__copy_user);
 		"	GETB D1Ar1,[%1++]\n"		\
 		"4:	SETB [%0++],D1Ar1\n",		\
 		"5:	ADD  %2,%2,#1\n"		\
-		"	SETB [%0++],D1Ar1\n",		\
+		"	ADD  %0,%0,#1\n",		\
 		"	.long 4b,5b\n")
 
 #define __asm_copy_from_user_4x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
@@ -645,23 +644,20 @@ EXPORT_SYMBOL(__copy_user);
 		"	GETD D1Ar1,[%1++]\n"		\
 		"2:	SETD [%0++],D1Ar1\n" COPY,	\
 		"3:	ADD  %2,%2,#4\n"		\
-		"	SETD [%0++],D1Ar1\n" FIXUP,	\
+		"	ADD  %0,%0,#4\n" FIXUP,		\
 		"	.long 2b,3b\n" TENTRY)
 
 #define __asm_copy_from_user_4(to, from, ret) \
 	__asm_copy_from_user_4x_cont(to, from, ret, "", "", "")
 
-
 #define __asm_copy_from_user_8x64(to, from, ret) \
 	asm volatile (				\
 		"	GETL D0Ar2,D1Ar1,[%1++]\n"	\
 		"2:	SETL [%0++],D0Ar2,D1Ar1\n"	\
 		"1:\n"					\
 		"	.section .fixup,\"ax\"\n"	\
-		"	MOV D1Ar1,#0\n"			\
-		"	MOV D0Ar2,#0\n"			\
 		"3:	ADD  %2,%2,#8\n"		\
-		"	SETL [%0++],D0Ar2,D1Ar1\n"	\
+		"	ADD  %0,%0,#8\n"		\
 		"	MOVT    D0Ar2,#HI(1b)\n"	\
 		"	JUMP    D0Ar2,#LO(1b)\n"	\
 		"	.previous\n"			\
@@ -701,11 +697,12 @@ EXPORT_SYMBOL(__copy_user);
 		"SUB	%1, %1, #4\n")
 
 
-/* Copy from user to kernel, zeroing the bytes that were inaccessible in
-   userland.  The return-value is the number of bytes that were
-   inaccessible.  */
-unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
-				  unsigned long n)
+/*
+ * Copy from user to kernel. The return-value is the number of bytes that were
+ * inaccessible.
+ */
+unsigned long raw_copy_from_user(void *pdst, const void __user *psrc,
+				 unsigned long n)
 {
 	register char *dst asm ("A0.2") = pdst;
 	register const char __user *src asm ("A1.2") = psrc;
@@ -724,7 +721,7 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
 			__asm_copy_from_user_1(dst, src, retn);
 			n--;
 			if (retn)
-				goto copy_exception_bytes;
+				return retn + n;
 		}
 	}
 	if (((unsigned long) src & 2) && n >= 2) {
@@ -737,7 +734,7 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
 			__asm_copy_from_user_2(dst, src, retn);
 			n -= 2;
 			if (retn)
-				goto copy_exception_bytes;
+				return retn + n;
 		}
 	}
 
@@ -745,7 +742,7 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
 	   because if both adjustments were done, either both or
 	   neither reference had an exception.  */
 	if (retn != 0)
-		goto copy_exception_bytes;
+		return retn + n;
 
 #ifdef USE_RAPF
 	/* 64 bit copy loop */
@@ -759,7 +756,7 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
 			__asm_copy_from_user_8x64(dst, src, retn);
 			n -= 8;
 			if (retn)
-				goto copy_exception_bytes;
+				return retn + n;
 		}
 	}
 
@@ -775,7 +772,7 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
 			__asm_copy_from_user_8x64(dst, src, retn);
 			n -= 8;
 			if (retn)
-				goto copy_exception_bytes;
+				return retn + n;
 		}
 	}
 #endif
@@ -785,7 +782,7 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
 		n -= 4;
 
 		if (retn)
-			goto copy_exception_bytes;
+			return retn + n;
 	}
 
 	/* If we get here, there were no memory read faults.  */
@@ -811,21 +808,8 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
 	/* If we get here, retn correctly reflects the number of failing
 	   bytes.  */
 	return retn;
-
- copy_exception_bytes:
-	/* We already have "retn" bytes cleared, and need to clear the
-	   remaining "n" bytes.  A non-optimized simple byte-for-byte in-line
-	   memset is preferred here, since this isn't speed-critical code and
-	   we'd rather have this a leaf-function than calling memset.  */
-	{
-		char *endp;
-		for (endp = dst + n; dst < endp; dst++)
-			*dst = 0;
-	}
-
-	return retn + n;
 }
-EXPORT_SYMBOL(__copy_user_zeroing);
+EXPORT_SYMBOL(raw_copy_from_user);
 
 #define __asm_clear_8x64(to, ret) \
 	asm volatile (					\
-- 
git-series 0.8.10

* [PATCH 3/3] metag/usercopy: Switch to RAW_COPY_USER
       [not found] ` <cover.67b8034e25703fa0c68295f1ac1d7abe4bd1fac9.1491316836.git-series.james.hogan@imgtec.com>
@ 2017-04-04 14:46   ` James Hogan
  0 siblings, 0 replies; 11+ messages in thread
From: James Hogan @ 2017-04-04 14:46 UTC (permalink / raw)
  To: linux-metag, Al Viro; +Cc: James Hogan

Switch to using raw user copy instead of providing metag-specific
[__]copy_{to,from}_user[_inatomic](). This simplifies the metag
uaccess.h and allows us to take advantage of extra checking in the
generic versions.
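
For context, with ARCH_HAS_RAW_COPY_USER the architecture only has to
supply the two raw helpers (each returning the number of bytes *not*
copied), and the generic uaccess code layers access_ok() checking, tail
zeroing and the extra sanity checks on top. A simplified sketch of that
division of labour (not the exact generic code):

	/* arch provides: */
	unsigned long raw_copy_from_user(void *to, const void __user *from,
					 unsigned long n);
	unsigned long raw_copy_to_user(void __user *to, const void *from,
				       unsigned long n);

	/* generic layer then builds, roughly: */
	static inline unsigned long
	copy_from_user(void *to, const void __user *from, unsigned long n)
	{
		unsigned long res = n;

		might_fault();
		if (likely(access_ok(VERIFY_READ, from, n)))
			res = raw_copy_from_user(to, from, n);
		if (unlikely(res))
			memset(to + (n - res), 0, res);
		return res;
	}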

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-metag@vger.kernel.org
---
 arch/metag/Kconfig               |  1 +
 arch/metag/include/asm/uaccess.h | 31 ++-----------------------------
 arch/metag/lib/usercopy.c        |  6 +++---
 3 files changed, 6 insertions(+), 32 deletions(-)

diff --git a/arch/metag/Kconfig b/arch/metag/Kconfig
index 5b7a45d99cfb..ecce0c5ec8e8 100644
--- a/arch/metag/Kconfig
+++ b/arch/metag/Kconfig
@@ -1,5 +1,6 @@
 config METAG
 	def_bool y
+	select ARCH_HAS_RAW_COPY_USER
 	select EMBEDDED
 	select GENERIC_ATOMIC64
 	select GENERIC_CLOCKEVENTS
diff --git a/arch/metag/include/asm/uaccess.h b/arch/metag/include/asm/uaccess.h
index 61805d9f9375..5ebc2850690e 100644
--- a/arch/metag/include/asm/uaccess.h
+++ b/arch/metag/include/asm/uaccess.h
@@ -174,35 +174,8 @@ extern long __must_check strnlen_user(const char __user *src, long count);
 
 extern unsigned long raw_copy_from_user(void *to, const void __user *from,
 					unsigned long n);
-
-static inline unsigned long
-copy_from_user(void *to, const void __user *from, unsigned long n)
-{
-	unsigned long res = n;
-	if (likely(access_ok(VERIFY_READ, from, n)))
-		res = raw_copy_from_user(to, from, n);
-	if (unlikely(res))
-		memset(to + (n - res), 0, res);
-	return res;
-}
-
-#define __copy_from_user(to, from, n) raw_copy_from_user(to, from, n)
-#define __copy_from_user_inatomic __copy_from_user
-
-extern unsigned long __must_check __copy_user(void __user *to,
-					      const void *from,
-					      unsigned long n);
-
-static inline unsigned long copy_to_user(void __user *to, const void *from,
-					 unsigned long n)
-{
-	if (access_ok(VERIFY_WRITE, to, n))
-		return __copy_user(to, from, n);
-	return n;
-}
-
-#define __copy_to_user(to, from, n) __copy_user(to, from, n)
-#define __copy_to_user_inatomic __copy_to_user
+extern unsigned long raw_copy_to_user(void __user *to, const void *from,
+				      unsigned long n);
 
 /*
  * Zero Userspace
diff --git a/arch/metag/lib/usercopy.c b/arch/metag/lib/usercopy.c
index bb1f7d60fcc0..9c9aa2442b70 100644
--- a/arch/metag/lib/usercopy.c
+++ b/arch/metag/lib/usercopy.c
@@ -524,8 +524,8 @@
 		"SUB	%1,	%1,	D0Ar2\n"			\
 		"SUB	%3, %3, D1Ar1\n")
 
-unsigned long __copy_user(void __user *pdst, const void *psrc,
-			  unsigned long n)
+unsigned long raw_copy_to_user(void __user *pdst, const void *psrc,
+			       unsigned long n)
 {
 	register char __user *dst asm ("A0.2") = pdst;
 	register const char *src asm ("A1.2") = psrc;
@@ -610,7 +610,7 @@ unsigned long __copy_user(void __user *pdst, const void *psrc,
 
 	return retn;
 }
-EXPORT_SYMBOL(__copy_user);
+EXPORT_SYMBOL(raw_copy_to_user);
 
 #define __asm_copy_from_user_1(to, from, ret) \
 	__asm_copy_user_cont(to, from, ret,	\
-- 
git-series 0.8.10

* Re: [PATCH 2/3] metag/usercopy: Zero rest of buffer from copy_from_user
@ 2017-04-04 16:31     ` Al Viro
  0 siblings, 0 replies; 11+ messages in thread
From: Al Viro @ 2017-04-04 16:31 UTC (permalink / raw)
  To: James Hogan; +Cc: linux-metag, stable

On Tue, Apr 04, 2017 at 03:46:51PM +0100, James Hogan wrote:
> @@ -618,7 +617,7 @@ EXPORT_SYMBOL(__copy_user);
>  		"	GETB D1Ar1,[%1++]\n"	\
>  		"2:	SETB [%0++],D1Ar1\n",	\
>  		"3:	ADD  %2,%2,#1\n"	\
> -		"	SETB [%0++],D1Ar1\n",	\
> +		"	ADD  %0,%0,#1\n",	\

Why bother incrementing dst here?

>  		"	.long 2b,3b\n")
>  
>  #define __asm_copy_from_user_2x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
> @@ -626,7 +625,7 @@ EXPORT_SYMBOL(__copy_user);
>  		"	GETW D1Ar1,[%1++]\n"		\
>  		"2:	SETW [%0++],D1Ar1\n" COPY,	\
>  		"3:	ADD  %2,%2,#2\n"		\
> -		"	SETW [%0++],D1Ar1\n" FIXUP,	\
> +		"	ADD  %0,%0,#2\n" FIXUP,		\
>  		"	.long 2b,3b\n" TENTRY)
>  
>  #define __asm_copy_from_user_2(to, from, ret) \
> @@ -637,7 +636,7 @@ EXPORT_SYMBOL(__copy_user);
>  		"	GETB D1Ar1,[%1++]\n"		\
>  		"4:	SETB [%0++],D1Ar1\n",		\
>  		"5:	ADD  %2,%2,#1\n"		\
> -		"	SETB [%0++],D1Ar1\n",		\
> +		"	ADD  %0,%0,#1\n",		\
>  		"	.long 4b,5b\n")
>  
>  #define __asm_copy_from_user_4x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
> @@ -645,23 +644,20 @@ EXPORT_SYMBOL(__copy_user);
>  		"	GETD D1Ar1,[%1++]\n"		\
>  		"2:	SETD [%0++],D1Ar1\n" COPY,	\
>  		"3:	ADD  %2,%2,#4\n"		\
> -		"	SETD [%0++],D1Ar1\n" FIXUP,	\
> +		"	ADD  %0,%0,#4\n" FIXUP,		\
>  		"	.long 2b,3b\n" TENTRY)
>
>  #define __asm_copy_from_user_8x64(to, from, ret) \
>  	asm volatile (				\
>  		"	GETL D0Ar2,D1Ar1,[%1++]\n"	\
>  		"2:	SETL [%0++],D0Ar2,D1Ar1\n"	\
>  		"1:\n"					\
>  		"	.section .fixup,\"ax\"\n"	\
> -		"	MOV D1Ar1,#0\n"			\
> -		"	MOV D0Ar2,#0\n"			\
>  		"3:	ADD  %2,%2,#8\n"		\
> -		"	SETL [%0++],D0Ar2,D1Ar1\n"	\
> +		"	ADD  %0,%0,#8\n"		\
>  		"	MOVT    D0Ar2,#HI(1b)\n"	\
>  		"	JUMP    D0Ar2,#LO(1b)\n"	\
>  		"	.previous\n"			\


Ditto for all of those, actually.

> +/*
> + * Copy from user to kernel. The return-value is the number of bytes that were
> + * inaccessible.
> + */
> +unsigned long raw_copy_from_user(void *pdst, const void __user *psrc,
> +				 unsigned long n)
>  {
>  	register char *dst asm ("A0.2") = pdst;
>  	register const char __user *src asm ("A1.2") = psrc;
> @@ -724,7 +721,7 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
>  			__asm_copy_from_user_1(dst, src, retn);
>  			n--;
>  			if (retn)
> -				goto copy_exception_bytes;
> +				return retn + n;
>  		}
>  	}

Umm...  What happens if psrc points to the last byte of an unmapped page,
with psrc + 1 pointing to valid data?  AFAICS, you'll get GETB fail, have
retn increased to 1, n decremented by 1, then, assuming you end up in that
byte-copy loop, have GETB at src + 1 *succeed*, the value fetched fed to SETB
at dst + 1, n decremented again, non-zero retn finally spotted and n + retn
returned, reporting that you've copied a single byte.  Which you certainly
have, but it's pdst[1] = psrc[1], not pdst[0] = psrc[0].

IOW, you need to check retn after all __asm_copy_from_user_{1,2} in the
beginning.
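
I.e. something like this after each single access in the alignment
prologue (a sketch of the shape of the fix only, not tested):

	if ((unsigned long) src & 1) {
		__asm_copy_from_user_1(dst, src, retn);
		n--;
		if (retn)
			return retn + n;	/* don't go on to pdst[1] = psrc[1] */
	}
	...
	if (((unsigned long) src & 2) && n >= 2) {
		__asm_copy_from_user_2(dst, src, retn);
		n -= 2;
		if (retn)
			return retn + n;
	}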

As for topic branch for #1 and #2 - sure, perfectly fine by me.  Just give
me the branch name and I'll pull.  But I think the scenario above needs to
be dealt with...

* Re: [PATCH 2/3] metag/usercopy: Zero rest of buffer from copy_from_user
@ 2017-04-05  6:54       ` James Hogan
  0 siblings, 0 replies; 11+ messages in thread
From: James Hogan @ 2017-04-05  6:54 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-metag, stable

On Tue, Apr 04, 2017 at 05:31:57PM +0100, Al Viro wrote:
> On Tue, Apr 04, 2017 at 03:46:51PM +0100, James Hogan wrote:
> > @@ -618,7 +617,7 @@ EXPORT_SYMBOL(__copy_user);
> >  		"	GETB D1Ar1,[%1++]\n"	\
> >  		"2:	SETB [%0++],D1Ar1\n",	\
> >  		"3:	ADD  %2,%2,#1\n"	\
> > -		"	SETB [%0++],D1Ar1\n",	\
> > +		"	ADD  %0,%0,#1\n",	\
> 
> Why bother incrementing dst here?

Mainly for consistency with the current expectations of these macros
(otherwise we'd break the unaligned handling of valid copies), but I
think I agree that if the issue mentioned below is fixed before this
patch, we could remove these adds, since it'll always result in an
immediate return based on n and retn (and not dst).
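
For reference, the expectation comes from the fall-through fixup
chaining: in a composed macro like __asm_copy_from_user_3() the word
fixup falls through into the byte fixup, so each step has to keep both
%2 (retn) and %0 (dst) in step. Roughly, with this patch applied:

	.section .fixup,"ax"
3:	ADD  %2,%2,#2		! GETW faulted: count the word...
	ADD  %0,%0,#2		! ...and keep dst consistent
5:	ADD  %2,%2,#1		! fall through: count the trailing byte
	ADD  %0,%0,#1
	MOVT D1Ar1,#HI(1b)
	JUMP D1Ar1,#LO(1b)
	.previous

so a fault on the leading GETW still accounts for all 3 bytes.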

> > +/*
> > + * Copy from user to kernel. The return-value is the number of bytes that were
> > + * inaccessible.
> > + */
> > +unsigned long raw_copy_from_user(void *pdst, const void __user *psrc,
> > +				 unsigned long n)
> >  {
> >  	register char *dst asm ("A0.2") = pdst;
> >  	register const char __user *src asm ("A1.2") = psrc;
> > @@ -724,7 +721,7 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
> >  			__asm_copy_from_user_1(dst, src, retn);
> >  			n--;
> >  			if (retn)
> > -				goto copy_exception_bytes;
> > +				return retn + n;
> >  		}
> >  	}
> 
> Umm...  What happens if psrc points to the last byte of an unmapped page,
> with psrc + 1 pointing to valid data?  AFAICS, you'll get GETB fail, have
> retn increased to 1, n decremented by 1, then, assuming you end up in that
> byte-copy loop, have GETB at src + 1 *succeed*, the value fetched fed to SETB
> at dst + 1, n decremented again, non-zero retn finally spotted and n + retn
> returned, reporting that you've copied a single byte.  Which you certainly
> have, but it's pdst[1] = psrc[1], not pdst[0] = psrc[0].
> 
> IOW, you need to check retn after all __asm_copy_from_user_{1,2} in the
> beginning.

Agreed. This is one of the other fixes I mentioned. I'll rearrange to
put that fix first.

> As for topic branch for #1 and #2 - sure, perfectly fine by me.  Just give
> me the branch name and I'll pull.  But I think the scenario above needs to
> be dealt with...

Thanks for reviewing

Cheers
James
