linux-kernel.vger.kernel.org archive mirror
* [PERFORMANCE] fs: sendfile suffers performance degradation when buffer size has a performance impact on the underlying IO
@ 2023-10-21  0:19 David Wang
  2023-10-22 22:53 ` Dave Chinner
  0 siblings, 1 reply; 3+ messages in thread
From: David Wang @ 2023-10-21  0:19 UTC (permalink / raw)
  To: linux-fsdevel, linux-kernel

Hi, 

I was trying to confirm the performance improvement of replacing read/write sequences with sendfile,
but I got quite a surprising result:

$ gcc -DUSE_SENDFILE cp.cpp
$ time ./a.out 

real	0m56.121s
user	0m0.000s
sys	0m4.844s

$ gcc  cp.cpp
$ time ./a.out 

real	0m27.363s
user	0m0.014s
sys	0m4.443s

The results show that, in my test scenario, the read/write sequence takes only about half the time sendfile does.
My guess is that sendfile uses an internal pipe with a default buffer size of 1<<16 (16 pages), which is not tuned for the underlying IO,
hence a read/write sequence with a buffer size of 1<<17 is much faster than sendfile.

But the problem with sendfile is that there is no parameter to tune the buffer size from userspace... Any chance to fix this?

The test code is as follows:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/sendfile.h>
#include <fcntl.h>

char buf[1<<17];   // much better than 1<<16
int main() {
	int i, fin, fout, n, m;
	for (i=0; i<128; i++) {
		// dd if=/dev/urandom of=./bigfile bs=131072 count=256
		fin  = open("./bigfile", O_RDONLY);
		fout = open("./target", O_WRONLY | O_CREAT | O_DSYNC, S_IWUSR);
#ifndef USE_SENDFILE 
		while(1) {
			n = read(fin, buf, sizeof(buf));
			if (n < 0) { perror("fail to read"); return 1; }
			if (n == 0) break;
			m = write(fout, buf, n);
			if (n != m) {
				printf("fail to write, expect %d, actual %d\n", n, m);
				perror(":");
				return 1;
			}
		}
#else
		off_t offset = 0;
		struct stat st;
		if (fstat(fin, &st) != 0) {
			perror("fail to fstat");
			return 1;
		}
		// sendfile() may transfer less than requested, so loop until
		// the whole file has been copied, checking for errors.
		while (offset < st.st_size) {
			ssize_t sent = sendfile(fout, fin, &offset, st.st_size - offset);
			if (sent <= 0) {
				perror("fail to sendfile");
				return 1;
			}
		}

#endif
		close(fin);
		close(fout);

	}
	return 0;
}

FYI
David



* Re: [PERFORMANCE] fs: sendfile suffers performance degradation when buffer size has a performance impact on the underlying IO
  2023-10-21  0:19 [PERFORMANCE] fs: sendfile suffers performance degradation when buffer size has a performance impact on the underlying IO David Wang
@ 2023-10-22 22:53 ` Dave Chinner
  2023-10-23  2:16   ` David Wang
  0 siblings, 1 reply; 3+ messages in thread
From: Dave Chinner @ 2023-10-22 22:53 UTC (permalink / raw)
  To: David Wang; +Cc: linux-fsdevel, linux-kernel

On Sat, Oct 21, 2023 at 08:19:34AM +0800, David Wang wrote:
> Hi, 
> 
> I was trying to confirm the performance improvement of replacing read/write sequences with sendfile,
> but I got quite a surprising result:
> 
> $ gcc -DUSE_SENDFILE cp.cpp
> $ time ./a.out 
> 
> real	0m56.121s
> user	0m0.000s
> sys	0m4.844s
> 
> $ gcc  cp.cpp
> $ time ./a.out 
> 
> real	0m27.363s
> user	0m0.014s
> sys	0m4.443s
> 
> The results show that, in my test scenario, the read/write sequence takes only about half the time sendfile does.
> My guess is that sendfile uses an internal pipe with a default buffer size of 1<<16 (16 pages), which is not tuned for the underlying IO,
> hence a read/write sequence with a buffer size of 1<<17 is much faster than sendfile.

Nope, it's just that you are forcing sendfile to do synchronous IO
on each internal loop, i.e.:

> But the problem with sendfile is that there is no parameter to tune the buffer size from userspace... Any chance to fix this?
> 
> The test code is as follows:
> 
> #include <stdio.h>
> #include <unistd.h>
> #include <sys/types.h>
> #include <sys/stat.h>
> #include <sys/sendfile.h>
> #include <fcntl.h>
> 
> char buf[1<<17];   // much better than 1<<16
> int main() {
> 	int i, fin, fout, n, m;
> 	for (i=0; i<128; i++) {
> 		// dd if=/dev/urandom of=./bigfile bs=131072 count=256
> 		fin  = open("./bigfile", O_RDONLY);
> 		fout = open("./target", O_WRONLY | O_CREAT | O_DSYNC, S_IWUSR);

O_DSYNC is the problem here.

This forces an IO to disk for every write IO submission from
sendfile to the filesystem. For synchronous IO (as in "waiting for
completion before sending the next IO"), a larger IO size will
*always* move data faster to storage.

FWIW, you'll get the same behaviour if you use O_DIRECT for either
source or destination file with sendfile - synchronous 64kB IOs are
a massive performance limitation even without O_DSYNC.
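
(Back-of-the-envelope, taking the 64kB figure above and the 32 MiB test
file from the dd command in the original mail:

    32 MiB / 64 kB  = 512 synchronous flush waits per copy via sendfile
    32 MiB / 128 kB = 256 synchronous flush waits per copy via read/write

which lines up with the roughly 2x difference in wall clock time.)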

IOWs, don't use sendfile like this. Use buffered IO and
sendfile(fd); fdatasync(fd); if you need data integrity guarantees
and you won't see any perf problems resulting from the size of the
internal sendfile buffer....
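
For illustration, a minimal sketch of that pattern (copy_file is just a
placeholder name, error cleanup is trimmed):

#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

// Buffered copy: no O_DSYNC; a single fdatasync() at the end provides
// the data integrity guarantee instead of a flush for every 64kB chunk.
static int copy_file(const char *src, const char *dst)
{
	struct stat st;
	off_t offset = 0;
	int fin  = open(src, O_RDONLY);
	int fout = open(dst, O_WRONLY | O_CREAT, S_IWUSR);

	if (fin < 0 || fout < 0 || fstat(fin, &st) != 0)
		return -1;

	// sendfile() may transfer less than requested, so loop until done.
	while (offset < st.st_size) {
		ssize_t n = sendfile(fout, fin, &offset, st.st_size - offset);
		if (n <= 0)
			return -1;
	}

	if (fdatasync(fout) != 0)
		return -1;

	close(fin);
	close(fout);
	return 0;
}

Dropping something like copy_file("./bigfile", "./target") into the test
loop in place of the O_DSYNC version should bring the sendfile numbers in
line with the read/write ones.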

-Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PERFORMANCE] fs: sendfile suffers performance degradation when buffer size has a performance impact on the underlying IO
  2023-10-22 22:53 ` Dave Chinner
@ 2023-10-23  2:16   ` David Wang
  0 siblings, 0 replies; 3+ messages in thread
From: David Wang @ 2023-10-23  2:16 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-fsdevel, linux-kernel




At 2023-10-23 06:53:17, "Dave Chinner" <david@fromorbit.com> wrote:

>
>O_DSYNC is the problem here.
>
>This forces an IO to disk for every write IO submission from
>sendfile to the filesystem. For synchronous IO (as in "waiting for
>completion before sending the next IO"), a larger IO size will
>*always* move data faster to storage.
>
>FWIW, you'll get the same behaviour if you use O_DIRECT for either
>source or destination file with sendfile - synchronous 64kB IOs are
>a massive performance limitation even without O_DSYNC.
>
>IOWs, don't use sendfile like this. Use buffered IO and
>sendfile(fd); fdatasync(fd); if you need data integrity guarantees
>and you won't see any perf problems resulting from the size of the
>internal sendfile buffer....
>
>-Dave.
>-- 
>Dave Chinner
>david@fromorbit.com

Thanks for the information. Yes, with buffered IO there is no significant
performance difference.
I feel this usage caveat should be recorded in the NOTES section of the
sendfile(2) man page.

Thanks
David

 

