* x86: Static optimisations for copy_user
@ 2017-06-01  6:58 ` Chris Wilson
  0 siblings, 0 replies; 8+ messages in thread
From: Chris Wilson @ 2017-06-01  6:58 UTC (permalink / raw)
  To: linux-kernel; +Cc: x86, intel-gfx

I was looking at the overhead of drmIoctl() in a microbenchmark that
repeatedly did a copy_from_user(.size=8) followed by a
copy_to_user(.size=8) as part of the DRM_IOCTL_I915_GEM_BUSY ioctl. I
found that if I force-inlined the get_user/put_user instead, the
walltime of the ioctl improved by about 20%. If
copy_user_generic_unrolled was used instead of
copy_user_enhanced_fast_string, performance of the microbenchmark
improved by 10%. Benchmarking on a few machines:

(Broadwell)
 benchmark_copy_user(hot):
       size   unrolled     string fast-string
          1        158         77         79
          2        306        154        158
          4        614        308        317
          6        926        462        476
          8       1344        298        635
         12       1773        482        952
         16       2797        602       1269
         24       4020        903       1906
         32       5055       1204       2540
         48       6150       1806       3810
         64       9564       2409       5082
         96      13583       3612       6483
        128      18108       4815       8434

(Broxton)
 benchmark_copy_user(hot):
       size   unrolled     string fast-string
          1        270         52         53
          2        364        106        109
          4        460        213        218
          6        486        305        312
          8       1250        253        437
         12       1009        332        625
         16       2059        514        897
         24       2624        672       1071
         32       3043       1014       1750
         48       3620       1499       2561
         64       7777       1971       3333
         96       7499       2876       4772
        128       9999       3733       6088

which shows that for this cache-hot case the rep mov microcode
noticeably underperforms in the benchmark. Though once we pass a few
cachelines, and definitely after exceeding the L1 cache, rep mov is the
clear winner. From cold, there is no difference in timings.
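
For reference, the microbenchmark is roughly of this shape (a minimal
sketch only; the drm fd and buffer handle setup are assumed to exist,
and error handling is elided):

	#include <stdint.h>
	#include <time.h>
	#include <xf86drm.h>
	#include <i915_drm.h>

	static double busy_ioctls_per_sec(int fd, uint32_t handle, int loops)
	{
		/* each call copies 8 bytes from user and 8 bytes back */
		struct drm_i915_gem_busy busy = { .handle = handle };
		struct timespec t0, t1;
		int i;

		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (i = 0; i < loops; i++)
			drmIoctl(fd, DRM_IOCTL_I915_GEM_BUSY, &busy);
		clock_gettime(CLOCK_MONOTONIC, &t1);

		return loops / ((t1.tv_sec - t0.tv_sec) +
				(t1.tv_nsec - t0.tv_nsec) * 1e-9);
	}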

I can improve the microbenchmark either by force-inlining the
raw_copy_*_user switches, or by switching to copy_user_generic_unrolled.
Both leave a sour taste. The switch is too big to be inlined, and if
called out-of-line the function-call overhead negates its benefit.
Switching between fast-string and unrolled makes a presumption about
behaviour.

In the end, I limited this series to just adding a few extra
translations for statically known copy_*_user().
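
For example, a statically known pair of the kind this targets (a
hypothetical handler; the struct and uarg are illustrative only):

	struct { u32 handle; u32 flags; } args;	/* 8 bytes */

	if (copy_from_user(&args, uarg, sizeof(args)))
		return -EFAULT;
	/* ... do the work ... */
	if (copy_to_user(uarg, &args, sizeof(args)))
		return -EFAULT;

Because sizeof(args) is a compile-time constant, the fixed-size cases
in raw_copy_*_user() can be picked instead of calling out of line.
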
-Chris

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/3] x86-32: Teach copy_from_user to unroll .size=6/8
  2017-06-01  6:58 ` Chris Wilson
@ 2017-06-01  6:58   ` Chris Wilson
  -1 siblings, 0 replies; 8+ messages in thread
From: Chris Wilson @ 2017-06-01  6:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: x86, intel-gfx, Chris Wilson, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin

Two exception-handling register moves are faster to inline than a call
to __copy_user_ll(). We already apply this conversion for a get_user()
call, so for symmetry we should also apply the optimisation to
copy_from_user().

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
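A hypothetical x86-32 caller that would now hit the new case 8 (for
illustration only; uptr is an arbitrary __user pointer, not from this
series):

	u64 val;

	if (copy_from_user(&val, uptr, sizeof(val))) /* sizeof == 8 */
		return -EFAULT;

With a compile-time-constant size, raw_copy_from_user() can now expand
the copy into two movs plus their exception-table entries rather than
a call to __copy_user_ll().
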
 arch/x86/include/asm/uaccess_32.h | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index aeda9bb8af50..44d17d1ab07c 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -23,30 +23,47 @@ static __always_inline unsigned long
 raw_copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	if (__builtin_constant_p(n)) {
-		unsigned long ret;
+		unsigned long ret = 0;
 
 		switch (n) {
 		case 1:
-			ret = 0;
 			__uaccess_begin();
 			__get_user_asm_nozero(*(u8 *)to, from, ret,
 					      "b", "b", "=q", 1);
 			__uaccess_end();
 			return ret;
 		case 2:
-			ret = 0;
 			__uaccess_begin();
 			__get_user_asm_nozero(*(u16 *)to, from, ret,
 					      "w", "w", "=r", 2);
 			__uaccess_end();
 			return ret;
 		case 4:
-			ret = 0;
 			__uaccess_begin();
 			__get_user_asm_nozero(*(u32 *)to, from, ret,
 					      "l", "k", "=r", 4);
 			__uaccess_end();
 			return ret;
+		case 6:
+			__uaccess_begin();
+			__get_user_asm_nozero(*(u32 *)to, from, ret,
+					      "l", "k", "=r", 6);
+			if (likely(!ret))
+				__get_user_asm_nozero(*(u16 *)(4 + (char *)to),
+						      (u16 __user *)(4 + (char __user *)from),
+						      ret, "w", "w", "=r", 2);
+			__uaccess_end();
+			return ret;
+		case 8:
+			__uaccess_begin();
+			__get_user_asm_nozero(*(u32 *)to, from, ret,
+					      "l", "k", "=r", 8);
+			if (likely(!ret))
+				__get_user_asm_nozero(*(u32 *)(4 + (char *)to),
+						      (u32 __user *)(4 + (char __user *)from),
+						      ret, "l", "k", "=r", 4);
+			__uaccess_end();
+			return ret;
 		}
 	}
 	return __copy_user_ll(to, (__force const void *)from, n);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH 2/3] x86-32: Expand static copy_to_user()
  2017-06-01  6:58 ` Chris Wilson
@ 2017-06-01  6:58   ` Chris Wilson
  -1 siblings, 0 replies; 8+ messages in thread
From: Chris Wilson @ 2017-06-01  6:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: x86, intel-gfx, Chris Wilson, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin

For known compile-time fixed sizes, teach x86-32 copy_to_user() to
convert them to the simpler put_user() and inline it, similar to the
optimisation applied to copy_from_user() and already used by x86-64.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
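A hypothetical caller for one of the new fixed-size cases (for
illustration only; uptr is an arbitrary __user pointer, not from this
series):

	u8 reply[6];

	/* ... fill reply ... */
	if (copy_to_user(uptr, reply, sizeof(reply))) /* sizeof == 6 */
		return -EFAULT;

With a compile-time-constant size, raw_copy_to_user() can now expand
the copy into a 4-byte and a 2-byte mov rather than a call to
__copy_user_ll().
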
 arch/x86/include/asm/uaccess_32.h | 48 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 44d17d1ab07c..a02aa9db34ed 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -16,6 +16,54 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	if (__builtin_constant_p(n)) {
+		unsigned long ret = 0;
+
+		switch (n) {
+		case 1:
+			__uaccess_begin();
+			__put_user_asm(*(u8 *)from, to, ret,
+					"b", "b", "iq", 1);
+			__uaccess_end();
+			return ret;
+		case 2:
+			__uaccess_begin();
+			__put_user_asm(*(u16 *)from, to, ret,
+					"w", "w", "ir", 2);
+			__uaccess_end();
+			return ret;
+		case 4:
+			__uaccess_begin();
+			__put_user_asm(*(u32 *)from, to, ret,
+					"l", "k", "ir", 4);
+			__uaccess_end();
+			return ret;
+		case 6:
+			__uaccess_begin();
+			__put_user_asm(*(u32 *)from, to, ret,
+					"l", "k", "ir", 6);
+			if (likely(!ret)) {
+				asm("":::"memory");
+				__put_user_asm(*(u16 *)(4 + (char *)from),
+						(u16 __user *)(4 + (char __user *)to),
+						ret, "w", "w", "ir", 2);
+			}
+			__uaccess_end();
+			return ret;
+		case 8:
+			__uaccess_begin();
+			__put_user_asm(*(u32 *)from, to, ret,
+					"l", "k", "ir", 8);
+			if (likely(!ret)) {
+				asm("":::"memory");
+				__put_user_asm(*(u32 *)(4 + (char *)from),
+						(u32 __user *)(4 + (char __user *)to),
+						ret, "l", "k", "ir", 4);
+			}
+			__uaccess_end();
+			return ret;
+		}
+	}
 	return __copy_user_ll((__force void *)to, from, n);
 }
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH 3/3] x86-64: Inline 6/12 byte copy_user
  2017-06-01  6:58 ` Chris Wilson
                   ` (2 preceding siblings ...)
  (?)
@ 2017-06-01  6:58 ` Chris Wilson
  -1 siblings, 0 replies; 8+ messages in thread
From: Chris Wilson @ 2017-06-01  6:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: x86, intel-gfx, Chris Wilson, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin

Extend the list of replacements for compile-time-known sizes to include
6- and 12-byte copies. These expand to two movs (along with their
exception-table entries) and are cheaper to inline than the function
call (similar to the 10-byte copy already handled).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
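A hypothetical x86-64 caller for the new 12-byte case (for illustration
only; the struct and uptr are illustrative, not from this series):

	struct { u64 offset; u32 flags; } __packed arg;	/* 12 bytes */

	if (copy_from_user(&arg, uptr, sizeof(arg))) /* sizeof == 12 */
		return -EFAULT;

With a compile-time-constant size, raw_copy_from_user() can now expand
the copy into an 8-byte and a 4-byte mov, along with their
exception-table entries.
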
 arch/x86/include/asm/uaccess_64.h | 42 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index c5504b9a472e..ff2d65baa988 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -71,6 +71,16 @@ raw_copy_from_user(void *dst, const void __user *src, unsigned long size)
 			      ret, "l", "k", "=r", 4);
 		__uaccess_end();
 		return ret;
+	case 6:
+		__uaccess_begin();
+		__get_user_asm_nozero(*(u32 *)dst, (u32 __user *)src,
+			       ret, "l", "k", "=r", 6);
+		if (likely(!ret))
+			__get_user_asm_nozero(*(u16 *)(4 + (char *)dst),
+				       (u16 __user *)(4 + (char __user *)src),
+				       ret, "w", "w", "=r", 2);
+		__uaccess_end();
+		return ret;
 	case 8:
 		__uaccess_begin();
 		__get_user_asm_nozero(*(u64 *)dst, (u64 __user *)src,
@@ -87,6 +97,16 @@ raw_copy_from_user(void *dst, const void __user *src, unsigned long size)
 				       ret, "w", "w", "=r", 2);
 		__uaccess_end();
 		return ret;
+	case 12:
+		__uaccess_begin();
+		__get_user_asm_nozero(*(u64 *)dst, (u64 __user *)src,
+			       ret, "q", "", "=r", 12);
+		if (likely(!ret))
+			__get_user_asm_nozero(*(u32 *)(8 + (char *)dst),
+				       (u32 __user *)(8 + (char __user *)src),
+				       ret, "l", "k", "=r", 4);
+		__uaccess_end();
+		return ret;
 	case 16:
 		__uaccess_begin();
 		__get_user_asm_nozero(*(u64 *)dst, (u64 __user *)src,
@@ -128,6 +148,17 @@ raw_copy_to_user(void __user *dst, const void *src, unsigned long size)
 			      ret, "l", "k", "ir", 4);
 		__uaccess_end();
 		return ret;
+	case 6:
+		__uaccess_begin();
+		__put_user_asm(*(u32 *)src, (u32 __user *)dst,
+			       ret, "l", "k", "ir", 6);
+		if (likely(!ret)) {
+			asm("":::"memory");
+			__put_user_asm(2[(u16 *)src], 2 + (u16 __user *)dst,
+				       ret, "w", "w", "ir", 2);
+		}
+		__uaccess_end();
+		return ret;
 	case 8:
 		__uaccess_begin();
 		__put_user_asm(*(u64 *)src, (u64 __user *)dst,
@@ -145,6 +176,17 @@ raw_copy_to_user(void __user *dst, const void *src, unsigned long size)
 		}
 		__uaccess_end();
 		return ret;
+	case 12:
+		__uaccess_begin();
+		__put_user_asm(*(u64 *)src, (u64 __user *)dst,
+			       ret, "q", "", "er", 12);
+		if (likely(!ret)) {
+			asm("":::"memory");
+			__put_user_asm(2[(u32 *)src], 2 + (u32 __user *)dst,
+				       ret, "l", "k", "ir", 4);
+		}
+		__uaccess_end();
+		return ret;
 	case 16:
 		__uaccess_begin();
 		__put_user_asm(*(u64 *)src, (u64 __user *)dst,
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* ✓ Fi.CI.BAT: success for series starting with [1/3] x86-32: Teach copy_from_user to unroll .size=6/8
  2017-06-01  6:58 ` Chris Wilson
                   ` (3 preceding siblings ...)
  (?)
@ 2017-06-01  7:17 ` Patchwork
  -1 siblings, 0 replies; 8+ messages in thread
From: Patchwork @ 2017-06-01  7:17 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/3] x86-32: Teach copy_from_user to unroll .size=6/8
URL   : https://patchwork.freedesktop.org/series/25148/
State : success

== Summary ==

Series 25148v1 Series without cover letter
https://patchwork.freedesktop.org/api/1.0/series/25148/revisions/1/mbox/

Test kms_busy:
        Subgroup basic-flip-default-a:
                dmesg-warn -> PASS       (fi-skl-6700hq) fdo#101144 +3
Test kms_cursor_legacy:
        Subgroup basic-busy-flip-before-cursor-atomic:
                fail       -> PASS       (fi-skl-6700hq) fdo#101154 +7

fdo#101144 https://bugs.freedesktop.org/show_bug.cgi?id=101144
fdo#101154 https://bugs.freedesktop.org/show_bug.cgi?id=101154

fi-bdw-5557u     total:278  pass:267  dwarn:0   dfail:0   fail:0   skip:11  time:453s
fi-bdw-gvtdvm    total:278  pass:256  dwarn:8   dfail:0   fail:0   skip:14  time:432s
fi-bsw-n3050     total:278  pass:242  dwarn:0   dfail:0   fail:0   skip:36  time:568s
fi-bxt-j4205     total:278  pass:259  dwarn:0   dfail:0   fail:0   skip:19  time:508s
fi-byt-j1900     total:278  pass:254  dwarn:0   dfail:0   fail:0   skip:24  time:488s
fi-byt-n2820     total:278  pass:250  dwarn:0   dfail:0   fail:0   skip:28  time:482s
fi-hsw-4770      total:278  pass:262  dwarn:0   dfail:0   fail:0   skip:16  time:437s
fi-hsw-4770r     total:278  pass:262  dwarn:0   dfail:0   fail:0   skip:16  time:415s
fi-ilk-650       total:278  pass:228  dwarn:0   dfail:0   fail:0   skip:50  time:415s
fi-ivb-3520m     total:278  pass:260  dwarn:0   dfail:0   fail:0   skip:18  time:496s
fi-ivb-3770      total:278  pass:260  dwarn:0   dfail:0   fail:0   skip:18  time:466s
fi-kbl-7500u     total:278  pass:255  dwarn:5   dfail:0   fail:0   skip:18  time:465s
fi-kbl-7560u     total:278  pass:263  dwarn:5   dfail:0   fail:0   skip:10  time:572s
fi-skl-6260u     total:278  pass:268  dwarn:0   dfail:0   fail:0   skip:10  time:455s
fi-skl-6700hq    total:278  pass:239  dwarn:0   dfail:1   fail:17  skip:21  time:430s
fi-skl-6700k     total:278  pass:256  dwarn:4   dfail:0   fail:0   skip:18  time:469s
fi-skl-6770hq    total:278  pass:268  dwarn:0   dfail:0   fail:0   skip:10  time:506s
fi-skl-gvtdvm    total:278  pass:265  dwarn:0   dfail:0   fail:0   skip:13  time:436s
fi-snb-2520m     total:278  pass:250  dwarn:0   dfail:0   fail:0   skip:28  time:531s
fi-snb-2600      total:278  pass:249  dwarn:0   dfail:0   fail:0   skip:29  time:403s

923422469cbe33b47269502ce79caa6c307f41ad drm-tip: 2017y-06m-01d-06h-07m-33s UTC integration manifest
471d3b9 x86-64: Inline 6/12 byte copy_user
b8b7388 x86-32: Expand static copy_to_user()
5966f1a x86-32: Teach copy_from_user to unroll .size=6/8

== Logs ==

For more details see: https://intel-gfx-ci.01.org/CI/Patchwork_4852/

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2017-06-01  7:17 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-06-01  6:58 x86: Static optimisations for copy_user Chris Wilson
2017-06-01  6:58 ` Chris Wilson
2017-06-01  6:58 ` [PATCH 1/3] x86-32: Teach copy_from_user to unroll .size=6/8 Chris Wilson
2017-06-01  6:58   ` Chris Wilson
2017-06-01  6:58 ` [PATCH 2/3] x86-32: Expand static copy_to_user() Chris Wilson
2017-06-01  6:58   ` Chris Wilson
2017-06-01  6:58 ` [PATCH 3/3] x86-64: Inline 6/12 byte copy_user Chris Wilson
2017-06-01  7:17 ` ✓ Fi.CI.BAT: success for series starting with [1/3] x86-32: Teach copy_from_user to unroll .size=6/8 Patchwork
