All of lore.kernel.org
* [PATCH] increase pipe size/buffers/atomicity :D
@ 2010-04-08  1:38 brian
  2010-04-08  5:11 ` Eric Dumazet
  2010-04-08 15:14 ` Steven J. Magnani
  0 siblings, 2 replies; 4+ messages in thread
From: brian @ 2010-04-08  1:38 UTC (permalink / raw)
  To: linux-kernel

(tested and working with the 2.6.32.8 kernel, on an Athlon/686)


--- include/linux/pipe_fs_i.h.orig      2010-04-06 22:56:51.000000000 -0500
+++ include/linux/pipe_fs_i.h   2010-04-06 22:56:58.000000000 -0500
@@ -3,7 +3,7 @@

 #define PIPEFS_MAGIC 0x50495045

-#define PIPE_BUFFERS (16)
+#define PIPE_BUFFERS (32)

 #define PIPE_BUF_FLAG_LRU      0x01    /* page is on the LRU */
 #define PIPE_BUF_FLAG_ATOMIC   0x02    /* was atomically mapped */
--- include/asm-generic/page.h.orig     2010-04-06 22:57:08.000000000 -0500
+++ include/asm-generic/page.h  2010-04-06 22:57:23.000000000 -0500
@@ -12,7 +12,7 @@

 /* PAGE_SHIFT determines the page size */

-#define PAGE_SHIFT     12
+#define PAGE_SHIFT     13
 #ifdef __ASSEMBLY__
 #define PAGE_SIZE      (1 << PAGE_SHIFT)
 #else
--- include/linux/limits.h.orig 2010-04-06 22:54:15.000000000 -0500
+++ include/linux/limits.h      2010-04-06 22:56:28.000000000 -0500
@@ -10,7 +10,7 @@
 #define MAX_INPUT        255   /* size of the type-ahead buffer */
 #define NAME_MAX         255   /* # chars in a file name */
 #define PATH_MAX        4096   /* # chars in a path name including nul */
-#define PIPE_BUF        4096   /* # bytes in atomic write to a pipe */
+#define PIPE_BUF        8192   /* # bytes in atomic write to a pipe */
 #define XATTR_NAME_MAX   255   /* # chars in an extended attribute name */
 #define XATTR_SIZE_MAX 65536   /* size of an extended attribute value (64k) */
 #define XATTR_LIST_MAX 65536   /* size of extended attribute namelist (64k) */


* Re: [PATCH] increase pipe size/buffers/atomicity :D
  2010-04-08  1:38 [PATCH] increase pipe size/buffers/atomicity :D brian
@ 2010-04-08  5:11 ` Eric Dumazet
  2010-04-08 15:14 ` Steven J. Magnani
  1 sibling, 0 replies; 4+ messages in thread
From: Eric Dumazet @ 2010-04-08  5:11 UTC (permalink / raw)
  To: brian; +Cc: linux-kernel

On Wednesday, 07 April 2010 at 19:38 -0600, brian wrote:
> (tested and working with the 2.6.32.8 kernel, on an Athlon/686)
> 
> 
> --- include/linux/pipe_fs_i.h.orig      2010-04-06 22:56:51.000000000 -0500
> +++ include/linux/pipe_fs_i.h   2010-04-06 22:56:58.000000000 -0500
> @@ -3,7 +3,7 @@
> 
>  #define PIPEFS_MAGIC 0x50495045
> 
> -#define PIPE_BUFFERS (16)
> +#define PIPE_BUFFERS (32)

Doing such a thing puts high pressure on stack usage in some parts of
the kernel, and actually slows down some benchmarks.





* Re: [PATCH] increase pipe size/buffers/atomicity :D
  2010-04-08  1:38 [PATCH] increase pipe size/buffers/atomicity :D brian
  2010-04-08  5:11 ` Eric Dumazet
@ 2010-04-08 15:14 ` Steven J. Magnani
  2010-04-09 19:50   ` Brian Haslett
  1 sibling, 1 reply; 4+ messages in thread
From: Steven J. Magnani @ 2010-04-08 15:14 UTC (permalink / raw)
  To: brian; +Cc: linux-kernel

Brian -

On Wed, 2010-04-07 at 19:38 -0600, brian wrote:
> (tested and working with the 2.6.32.8 kernel, on an Athlon/686)

It would be good to know what issue this addresses. That gives people a
way to weigh any side effects or drawbacks against the benefits, and an
opportunity to suggest alternate or better approaches.

> --- include/linux/pipe_fs_i.h.orig      2010-04-06 22:56:51.000000000 -0500
> +++ include/linux/pipe_fs_i.h   2010-04-06 22:56:58.000000000 -0500
> @@ -3,7 +3,7 @@
> 
>  #define PIPEFS_MAGIC 0x50495045
> 
> -#define PIPE_BUFFERS (16)
> +#define PIPE_BUFFERS (32)

This worries me. In several places there are functions with 2 or 3
pointer arrays of dimension [PIPE_BUFFERS] on the stack. So this adds
anywhere from 128 to 384 bytes to the stack in these functions depending
on sizeof(void*) and the number of arrays.

> 
>  #define PIPE_BUF_FLAG_LRU      0x01    /* page is on the LRU */
>  #define PIPE_BUF_FLAG_ATOMIC   0x02    /* was atomically mapped */
> --- include/asm-generic/page.h.orig     2010-04-06 22:57:08.000000000 -0500
> +++ include/asm-generic/page.h  2010-04-06 22:57:23.000000000 -0500
> @@ -12,7 +12,7 @@
> 
>  /* PAGE_SHIFT determines the page size */
> 
> -#define PAGE_SHIFT     12
> +#define PAGE_SHIFT     13

This has pretty wide-ranging implications, both within and across
arches. I don't think it's something that can be changed easily. Also,
I don't believe this #define is used in your configuration
(Athlon/686) unless you're running without an MMU.

>  #ifdef __ASSEMBLY__
>  #define PAGE_SIZE      (1 << PAGE_SHIFT)
>  #else
> --- include/linux/limits.h.orig 2010-04-06 22:54:15.000000000 -0500
> +++ include/linux/limits.h      2010-04-06 22:56:28.000000000 -0500
> @@ -10,7 +10,7 @@
>  #define MAX_INPUT        255   /* size of the type-ahead buffer */
>  #define NAME_MAX         255   /* # chars in a file name */
>  #define PATH_MAX        4096   /* # chars in a path name including nul */
> -#define PIPE_BUF        4096   /* # bytes in atomic write to a pipe */
> +#define PIPE_BUF        8192   /* # bytes in atomic write to a pipe */

I don't see this being used within the kernel, so I assume it's a
userspace representation of PAGE_SIZE (ARM seems to associate these
explicitly). I would think you'd need to rebuild your glibc or
equivalent to notice any difference from a change.

Regards,
------------------------------------------------------------------------
 Steven J. Magnani               "I claim this network for MARS!
 www.digidescorp.com              Earthling, return my space modulator!"

 #include <standard.disclaimer>





* Re: [PATCH] increase pipe size/buffers/atomicity :D
  2010-04-08 15:14 ` Steven J. Magnani
@ 2010-04-09 19:50   ` Brian Haslett
  0 siblings, 0 replies; 4+ messages in thread
From: Brian Haslett @ 2010-04-09 19:50 UTC (permalink / raw)
  To: steve; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 3722 bytes --]

> On Wed, 2010-04-07 at 19:38 -0600, brian wrote:
>> (tested and working with the 2.6.32.8 kernel, on an Athlon/686)
>
> It would be good to know what issue this addresses. Gives people a way
> to weigh any side-effects/drawbacks against the benefits, and an
> opportunity to suggest alternate/better approaches.
>

I wouldn't say it addresses anything I'd really consider broken; it
started as a personal experiment of mine, aimed at a small performance
gain. I figured, hey, bigger pipes, why not? It looks like these pipe
sizes have practically been around since the epoch.


>>  #define PIPE_BUF_FLAG_LRU      0x01    /* page is on the LRU */
>>  #define PIPE_BUF_FLAG_ATOMIC   0x02    /* was atomically mapped */
>> --- include/asm-generic/page.h.orig     2010-04-06 22:57:08.000000000
>> -0500
>> +++ include/asm-generic/page.h  2010-04-06 22:57:23.000000000 -0500
>> @@ -12,7 +12,7 @@
>>
>>  /* PAGE_SHIFT determines the page size */
>>
>> -#define PAGE_SHIFT     12
>> +#define PAGE_SHIFT     13
>
> This has pretty wide-ranging implications, both within and across
> arches. I don't think it's something that can be changed easily. Also,
> I don't believe this #define is used in your configuration
> (Athlon/686) unless you're running without an MMU.
>

Actually, the reason I went after this is the whole reason I started
this ordeal to begin with: line 135 of pipe_fs_i.h, which reads
"#define PIPE_SIZE    PAGE_SIZE".


>>  #ifdef __ASSEMBLY__
>>  #define PAGE_SIZE      (1 << PAGE_SHIFT)
>>  #else
>> --- include/linux/limits.h.orig 2010-04-06 22:54:15.000000000 -0500
>> +++ include/linux/limits.h      2010-04-06 22:56:28.000000000 -0500
>> @@ -10,7 +10,7 @@
>>  #define MAX_INPUT        255   /* size of the type-ahead buffer */
>>  #define NAME_MAX         255   /* # chars in a file name */
>>  #define PATH_MAX        4096   /* # chars in a path name including nul */
>> -#define PIPE_BUF        4096   /* # bytes in atomic write to a pipe */
>> +#define PIPE_BUF        8192   /* # bytes in atomic write to a pipe */
>

You'd think so (according to some posts I'd read before I tried this),
but I actually tried several variations on a few things. Until I
changed *this one in particular*, my kernel would boot up fine, but the
shell/init/system phase itself would start giving me errors to the
effect of "unable to create pipe" and "too many file descriptors open",
over and over again.

>> --- include/linux/pipe_fs_i.h.orig      2010-04-06 22:56:51.000000000
>> -0500
>> +++ include/linux/pipe_fs_i.h   2010-04-06 22:56:58.000000000 -0500
>> @@ -3,7 +3,7 @@
>>
>>  #define PIPEFS_MAGIC 0x50495045
>>
>> -#define PIPE_BUFFERS (16)
>> +#define PIPE_BUFFERS (32)
>
> This worries me. In several places there are functions with 2 or 3
> pointer arrays of dimension [PIPE_BUFFERS] on the stack. So this adds
> anywhere from 128 to 384 bytes to the stack in these functions depending
> on sizeof(void*) and the number of arrays.
>

As my initial goal was just to increase the size of the pipes, I
figured I may as well increase the buffers too (although I'll admit I
haven't poked around every little .c/.h file that uses them).

I wasn't seriously trying to push anyone into jumping through hoops for
this thing; I was just a little excited and figured I'd share with you
all. I probably spent the better part of a few days researching, poking
around the kernel headers, and experimenting with different
combinations. As such, I've attached a .txt file explaining the
controlled (but probably not as thorough as you're used to) benchmark I
ran. It's not a pretty graph, I know, but gimme a break, I wrote it in
vim and did the math with bc ;)

[-- Attachment #2: benchmark1.txt --]
[-- Type: text/plain, Size: 4418 bytes --]

%%%%%%%%%%%%%
WITHOUT PATCH
%%%%%%%%%%%%%

dd if=/dev/zero of=/root/benchmark bs=512 count=20000
20000+0 records in
20000+0 records out
10240000 bytes (10 MB) copied, 0.674347 s, 15.2 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=20000
20000+0 records in
20000+0 records out
20480000 bytes (20 MB) copied, 0.89386 s, 22.9 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=20000
20000+0 records in
20000+0 records out
40960000 bytes (41 MB) copied, 1.36237 s, 30.1 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=20000
20000+0 records in
20000+0 records out
81920000 bytes (82 MB) copied, 2.81037 s, 29.1 MB/s

=============== 20000 blocks written averaged 24.325 MB/s
========================================================

dd if=/dev/zero of=/root/benchmark bs=512 count=40000
40000+0 records in
40000+0 records out
20480000 bytes (20 MB) copied, 1.31354 s, 15.6 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=40000
40000+0 records in
40000+0 records out
40960000 bytes (41 MB) copied, 1.8173 s, 22.5 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=40000
40000+0 records in
40000+0 records out
81920000 bytes (82 MB) copied, 3.23683 s, 25.3 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=40000
40000+0 records in
40000+0 records out
163840000 bytes (164 MB) copied, 6.79296 s, 24.1 MB/s

================== 40000 blocks written averaged 21.875 MB/s
============================================================

dd if=/dev/zero of=/root/benchmark bs=512 count=80000
80000+0 records in
80000+0 records out
40960000 bytes (41 MB) copied, 2.70969 s, 15.1 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=80000
80000+0 records in
80000+0 records out
81920000 bytes (82 MB) copied, 4.25879 s, 19.2 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=80000
80000+0 records in
80000+0 records out
163840000 bytes (164 MB) copied, 7.28753 s, 22.5 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=80000
80000+0 records in
80000+0 records out
327680000 bytes (328 MB) copied, 13.5436 s, 24.2 MB/s

==================== 80000 blocks written averaged 20.25 MB/s
=============================================================

%%%%%%%%%%%%%%
WITH PATCH (!)
%%%%%%%%%%%%%%

dd if=/dev/zero of=/root/benchmark bs=512 count=20000
20000+0 records in
20000+0 records out
10240000 bytes (10 MB) copied, 0.354359 s, 28.9 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=20000
20000+0 records in
20000+0 records out
20480000 bytes (20 MB) copied, 0.474818 s, 43.1 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=20000
20000+0 records in
20000+0 records out
40960000 bytes (41 MB) copied, 0.790466 s, 51.8 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=20000
20000+0 records in
20000+0 records out
81920000 bytes (82 MB) copied, 1.51956 s, 53.9 MB/s

================= 20000 blocks written averaged 44.425 MB/s (+82.6%)
====================================================================

dd if=/dev/zero of=/root/benchmark bs=512 count=40000
40000+0 records in
40000+0 records out
20480000 bytes (20 MB) copied, 0.731345 s, 28.0 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=40000
40000+0 records in
40000+0 records out
40960000 bytes (41 MB) copied, 1.06329 s, 38.5 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=40000
40000+0 records in
40000+0 records out
81920000 bytes (82 MB) copied, 1.85218 s, 44.2 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=40000
40000+0 records in
40000+0 records out
163840000 bytes (164 MB) copied, 4.08386 s, 40.1 MB/s

================= 40000 blocks written averaged 37.7 MB/s (+72.3%)
==================================================================

dd if=/dev/zero of=/root/benchmark bs=512 count=80000
80000+0 records in
80000+0 records out
40960000 bytes (41 MB) copied, 1.59573 s, 25.7 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=80000
80000+0 records in
80000+0 records out
81920000 bytes (82 MB) copied, 2.51223 s, 32.6 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=80000
80000+0 records in
80000+0 records out
163840000 bytes (164 MB) copied, 4.59659 s, 35.6 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=80000
80000+0 records in
80000+0 records out
327680000 bytes (328 MB) copied, 10.3018 s, 31.8 MB/s

=================== 80000 blocks written averaged 31.425 MB/s (+55.2%)
======================================================================

