linux-kernel.vger.kernel.org archive mirror
* [PATCH] X86/Mem: Use string copy operation to optimize copy in kernel decompression
@ 2010-09-26  9:12 yakui.zhao
From: yakui.zhao @ 2010-09-26  9:12 UTC
  To: hpa; +Cc: linux-kernel, Zhao Yakui

From: Zhao Yakui <yakui.zhao@intel.com>

After the kernel image is decompressed, the boot code parses the ELF
headers and copies each loadable segment to its destination. This copy
is currently done with a slow byte-by-byte loop. Use the x86 string
copy operations instead to speed up the copy during kernel
decompression. (The implementation originates from
arch/x86/lib/memcpy_32.c.) A portable C sketch of the word-plus-tail
copy idea follows the measurements below.

In testing, the string copy operations improve the copy performance
significantly:
	1. On one Atom machine, the copy time drops from 150ms to 20ms.
	2. On another machine, the copy time drops by about 80%:
		from 7ms to 1.5ms with a 32-bit kernel,
		from 10ms to 2ms with a 64-bit kernel.
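
For reference, here is a minimal portable C sketch of what the inline
assembly in the patch implements: copy n/4 machine words (n/8 on
64-bit) with "rep ; movsl"/"rep ; movsq", then copy the remaining tail
bytes with "rep ; movsb". The name word_copy is illustrative only and
is not part of the patch; like the rep-movs instructions, the sketch
relies on unaligned word accesses being safe, which holds on x86.

	#include <stddef.h>
	#include <stdint.h>

	/* 32-bit case; the 64-bit variant uses uint64_t and n / 8. */
	static void *word_copy(void *dest, const void *src, size_t n)
	{
		uint32_t *d = dest;
		const uint32_t *s = src;
		size_t words = n / 4;	/* what "0" (n / 4) puts in %ecx */
		size_t tail = n & 3;	/* leftover bytes for "rep ; movsb" */

		while (words--)		/* corresponds to "rep ; movsl" */
			*d++ = *s++;

		unsigned char *db = (unsigned char *)d;
		const unsigned char *sb = (const unsigned char *)s;
		while (tail--)		/* corresponds to "rep ; movsb" */
			*db++ = *sb++;

		return dest;
	}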

Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
---
 arch/x86/boot/compressed/misc.c |   35 +++++++++++++++++++++++++++++------
 1 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index 8f7bef8..34793ae 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -229,18 +229,41 @@ void *memset(void *s, int c, size_t n)
 		ss[i] = c;
 	return s;
 }
-
+#ifdef CONFIG_X86_32
 void *memcpy(void *dest, const void *src, size_t n)
 {
-	int i;
-	const char *s = src;
-	char *d = dest;
+	int d0, d1, d2;
+	asm volatile(
+		"rep ; movsl\n\t"
+		"movl %4,%%ecx\n\t"
+		"andl $3,%%ecx\n\t"
+		"jz 1f\n\t"
+		"rep ; movsb\n\t"
+		"1:"
+		: "=&c" (d0), "=&D" (d1), "=&S" (d2)
+		: "0" (n / 4), "g" (n), "1" ((long)dest), "2" ((long)src)
+		: "memory");
 
-	for (i = 0; i < n; i++)
-		d[i] = s[i];
 	return dest;
 }
+#else
+void *memcpy(void *dest, const void *src, size_t n)
+{
+	long d0, d1, d2;
+	asm volatile(
+		"rep ; movsq\n\t"
+		"movq %4,%%rcx\n\t"
+		"andq $7,%%rcx\n\t"
+		"jz 1f\n\t"
+		"rep ; movsb\n\t"
+		"1:"
+		: "=&c" (d0), "=&D" (d1), "=&S" (d2)
+		: "0" (n / 8), "g" (n), "1" ((long)dest), "2" ((long)src)
+		: "memory");
 
+	return dest;
+}
+#endif
 
 static void error(char *x)
 {
-- 
1.5.4.5



* [PATCH] X86/Mem: Use string copy operation to optimize copy in kernel decompression
@ 2010-10-08  1:47 yakui.zhao
From: yakui.zhao @ 2010-10-08  1:47 UTC
  To: hpa; +Cc: linux-kernel, Zhao Yakui

From: Zhao Yakui <yakui.zhao@intel.com>

After the kernel image is decompressed, the boot code parses the ELF
headers and copies each loadable segment to its destination. This copy
is currently done with a slow byte-by-byte loop. Use the x86 string
copy operations instead to speed up the copy during kernel
decompression. (The implementation originates from
arch/x86/lib/memcpy_32.c.) A note on what changed relative to the
2010-09-26 posting follows the measurements below.

In testing, the string copy operations improve the copy performance
significantly:
	1. On one Atom machine, the copy time drops from 150ms to 20ms.
	2. On another machine, the copy time drops by about 80%:
		from 7ms to 1.5ms with a 32-bit kernel,
		from 10ms to 2ms with a 64-bit kernel.
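
This resend simplifies the tail handling relative to the 2010-09-26
posting: the tail-byte count (n & 3, or n & 7 on 64-bit) is now
computed in C and passed straight into the count register, so the
"andl $3" mask and the "jz 1f" branch are gone. The branch is
unnecessary because a rep-prefixed movsb with a zero count executes no
iterations. Excerpts of the two tail sequences, quoted as C comments
for comparison only:

	/*
	 * v1 (2010-09-26), %4 is n:
	 *	"movl %4,%%ecx\n\t"
	 *	"andl $3,%%ecx\n\t"
	 *	"jz 1f\n\t"
	 *	"rep ; movsb\n\t"
	 *	"1:"
	 *
	 * v2 (this posting), %4 is (n & 3):
	 *	"movl %4,%%ecx\n\t"
	 *	"rep ; movsb\n\t"
	 *
	 * With %ecx == 0, "rep ; movsb" copies nothing, so no branch
	 * is needed.
	 */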

Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
---
 arch/x86/boot/compressed/misc.c |   29 +++++++++++++++++++++++------
 1 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index 8f7bef8..23f315c 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -229,18 +229,35 @@ void *memset(void *s, int c, size_t n)
 		ss[i] = c;
 	return s;
 }
-
+#ifdef CONFIG_X86_32
 void *memcpy(void *dest, const void *src, size_t n)
 {
-	int i;
-	const char *s = src;
-	char *d = dest;
+	int d0, d1, d2;
+	asm volatile(
+		"rep ; movsl\n\t"
+		"movl %4,%%ecx\n\t"
+		"rep ; movsb\n\t"
+		: "=&c" (d0), "=&D" (d1), "=&S" (d2)
+		: "0" (n >> 2), "g" (n & 3), "1" (dest), "2" (src)
+		: "memory");
 
-	for (i = 0; i < n; i++)
-		d[i] = s[i];
 	return dest;
 }
+#else
+void *memcpy(void *dest, const void *src, size_t n)
+{
+	long d0, d1, d2;
+	asm volatile(
+		"rep ; movsq\n\t"
+		"movq %4,%%rcx\n\t"
+		"rep ; movsb\n\t"
+		: "=&c" (d0), "=&D" (d1), "=&S" (d2)
+		: "0" (n >> 3), "g" (n & 7), "1" (dest), "2" (src)
+		: "memory");
 
+	return dest;
+}
+#endif
 
 static void error(char *x)
 {
-- 
1.5.4.5


