* [LTP] [PATCH v1] readahead02.c: Use fsync instead of sync
@ 2023-01-16  7:41 Wei Gao via ltp
  2023-01-16 15:08 ` Richard Palethorpe
  0 siblings, 1 reply; 6+ messages in thread
From: Wei Gao via ltp @ 2023-01-16  7:41 UTC (permalink / raw)
  To: ltp

Using fsync() on the test file instead of sync() should be faster than
syncing the whole system.

Signed-off-by: Wei Gao <wegao@suse.com>
---
 .../kernel/syscalls/readahead/readahead02.c     | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/testcases/kernel/syscalls/readahead/readahead02.c b/testcases/kernel/syscalls/readahead/readahead02.c
index 7acf4bb18..e04046bc3 100644
--- a/testcases/kernel/syscalls/readahead/readahead02.c
+++ b/testcases/kernel/syscalls/readahead/readahead02.c
@@ -99,6 +99,17 @@ static void drop_caches(void)
 	SAFE_FILE_PRINTF(DROP_CACHES_FNAME, "1");
 }
 
+static void sync_drop_caches(void)
+{
+	int fd;
+
+	fd  = SAFE_OPEN(testfile, O_RDONLY);
+	if (fsync(fd) == -1)
+		tst_brk(TBROK | TERRNO, "fsync()");
+	SAFE_CLOSE(fd);
+	drop_caches();
+}
+
 static unsigned long get_bytes_read(void)
 {
 	unsigned long ret;
@@ -233,8 +244,7 @@ static void test_readahead(unsigned int n)
 	read_testfile(tc, 0, testfile, testfile_size, &read_bytes, &usec,
 		      &cached);
 	cached_high = get_cached_size();
-	sync();
-	drop_caches();
+	sync_drop_caches();
 	cached_low = get_cached_size();
 	cached_max = MAX(cached_max, cached_high - cached_low);
 
@@ -246,8 +256,7 @@ static void test_readahead(unsigned int n)
 	else
 		cached = 0;
 
-	sync();
-	drop_caches();
+	sync_drop_caches();
 	cached_low = get_cached_size();
 	tst_res(TINFO, "read_testfile(1)");
 	ret = read_testfile(tc, 1, testfile, testfile_size, &read_bytes_ra,
-- 
2.35.3


-- 
Mailing list info: https://lists.linux.it/listinfo/ltp


* Re: [LTP] [PATCH v1] readahead02.c: Use fsync instead of sync
  2023-01-16  7:41 [LTP] [PATCH v1] readahead02.c: Use fsync instead of sync Wei Gao via ltp
@ 2023-01-16 15:08 ` Richard Palethorpe
  2023-01-17  2:22   ` Wei Gao via ltp
  0 siblings, 1 reply; 6+ messages in thread
From: Richard Palethorpe @ 2023-01-16 15:08 UTC (permalink / raw)
  To: Wei Gao; +Cc: ltp

Hello,

Wei Gao via ltp <ltp@lists.linux.it> writes:

> Using fsync() on the test file instead of sync() should be faster than
> syncing the whole system.

The test completes in less than a second in OpenQA. We don't want to
risk introducing a regression or spend time reviewing changes unless the
performance improvement solves a timeout.

I suggest you convert dup06 to the new API (for example) or investigate
a test failure.

>
> Signed-off-by: Wei Gao <wegao@suse.com>
> ---
>  .../kernel/syscalls/readahead/readahead02.c     | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/testcases/kernel/syscalls/readahead/readahead02.c b/testcases/kernel/syscalls/readahead/readahead02.c
> index 7acf4bb18..e04046bc3 100644
> --- a/testcases/kernel/syscalls/readahead/readahead02.c
> +++ b/testcases/kernel/syscalls/readahead/readahead02.c
> @@ -99,6 +99,17 @@ static void drop_caches(void)
>  	SAFE_FILE_PRINTF(DROP_CACHES_FNAME, "1");
>  }
>  
> +static void sync_drop_caches(void)
> +{
> +	int fd;
> +
> +	fd  = SAFE_OPEN(testfile, O_RDONLY);
> +	if (fsync(fd) == -1)
> +		tst_brk(TBROK | TERRNO, "fsync()");
> +	SAFE_CLOSE(fd);
> +	drop_caches();
> +}
> +
>  static unsigned long get_bytes_read(void)
>  {
>  	unsigned long ret;
> @@ -233,8 +244,7 @@ static void test_readahead(unsigned int n)
>  	read_testfile(tc, 0, testfile, testfile_size, &read_bytes, &usec,
>  		      &cached);
>  	cached_high = get_cached_size();
> -	sync();
> -	drop_caches();
> +	sync_drop_caches();
>  	cached_low = get_cached_size();
>  	cached_max = MAX(cached_max, cached_high - cached_low);
>  
> @@ -246,8 +256,7 @@ static void test_readahead(unsigned int n)
>  	else
>  		cached = 0;
>  
> -	sync();
> -	drop_caches();
> +	sync_drop_caches();
>  	cached_low = get_cached_size();
>  	tst_res(TINFO, "read_testfile(1)");
>  	ret = read_testfile(tc, 1, testfile, testfile_size, &read_bytes_ra,
> -- 
> 2.35.3


-- 
Thank you,
Richard.

-- 
Mailing list info: https://lists.linux.it/listinfo/ltp


* Re: [LTP] [PATCH v1] readahead02.c: Use fsync instead of sync
  2023-01-16 15:08 ` Richard Palethorpe
@ 2023-01-17  2:22   ` Wei Gao via ltp
  2023-01-17  9:23     ` Richard Palethorpe
  0 siblings, 1 reply; 6+ messages in thread
From: Wei Gao via ltp @ 2023-01-17  2:22 UTC (permalink / raw)
  To: Richard Palethorpe; +Cc: ltp

On Mon, Jan 16, 2023 at 03:08:44PM +0000, Richard Palethorpe wrote:
> Hello,
> 
> Wei Gao via ltp <ltp@lists.linux.it> writes:
> 
> > Using fsync() on the test file instead of sync() should be faster than
> > syncing the whole system.
> 
> The test completes in less than a second in OpenQA. We don't want to
> risk introducing a regression or spend time reviewing changes unless the
> performance improvement solves a timeout.
> 
> I suggest you convert dup06 to the new API (for example) or investigate
> a test failure.
> 
The motivation for this change is based on https://github.com/linux-test-project/ltp/issues/972,
which gives the following suggestion:
"As we run the test inside a loop device I guess that we can also 
sync and drop caches just for the device, which should be faster 
than syncing and dropping the whole system. Possibly we just need 
to umount it and mount it again."

But currently I cannot find any API to sync and drop caches ONLY for
the device, so in my view replacing the whole-system sync() with an
fsync() of the single test file should still help a little.

> >
> > Signed-off-by: Wei Gao <wegao@suse.com>
> > ---
> >  .../kernel/syscalls/readahead/readahead02.c     | 17 +++++++++++++----
> >  1 file changed, 13 insertions(+), 4 deletions(-)
> >
> > diff --git a/testcases/kernel/syscalls/readahead/readahead02.c b/testcases/kernel/syscalls/readahead/readahead02.c
> > index 7acf4bb18..e04046bc3 100644
> > --- a/testcases/kernel/syscalls/readahead/readahead02.c
> > +++ b/testcases/kernel/syscalls/readahead/readahead02.c
> > @@ -99,6 +99,17 @@ static void drop_caches(void)
> >  	SAFE_FILE_PRINTF(DROP_CACHES_FNAME, "1");
> >  }
> >  
> > +static void sync_drop_caches(void)
> > +{
> > +	int fd;
> > +
> > +	fd  = SAFE_OPEN(testfile, O_RDONLY);
> > +	if (fsync(fd) == -1)
> > +		tst_brk(TBROK | TERRNO, "fsync()");
> > +	SAFE_CLOSE(fd);
> > +	drop_caches();
> > +}
> > +
> >  static unsigned long get_bytes_read(void)
> >  {
> >  	unsigned long ret;
> > @@ -233,8 +244,7 @@ static void test_readahead(unsigned int n)
> >  	read_testfile(tc, 0, testfile, testfile_size, &read_bytes, &usec,
> >  		      &cached);
> >  	cached_high = get_cached_size();
> > -	sync();
> > -	drop_caches();
> > +	sync_drop_caches();
> >  	cached_low = get_cached_size();
> >  	cached_max = MAX(cached_max, cached_high - cached_low);
> >  
> > @@ -246,8 +256,7 @@ static void test_readahead(unsigned int n)
> >  	else
> >  		cached = 0;
> >  
> > -	sync();
> > -	drop_caches();
> > +	sync_drop_caches();
> >  	cached_low = get_cached_size();
> >  	tst_res(TINFO, "read_testfile(1)");
> >  	ret = read_testfile(tc, 1, testfile, testfile_size, &read_bytes_ra,
> > -- 
> > 2.35.3
> 
> 
> -- 
> Thank you,
> Richard.

-- 
Mailing list info: https://lists.linux.it/listinfo/ltp


* Re: [LTP] [PATCH v1] readahead02.c: Use fsync instead of sync
  2023-01-17  2:22   ` Wei Gao via ltp
@ 2023-01-17  9:23     ` Richard Palethorpe
  2023-01-17 16:33       ` Cyril Hrubis
  0 siblings, 1 reply; 6+ messages in thread
From: Richard Palethorpe @ 2023-01-17  9:23 UTC (permalink / raw)
  To: Wei Gao, Cyril Hrubis; +Cc: ltp

Hello,

Wei Gao <wegao@suse.com> writes:

> On Mon, Jan 16, 2023 at 03:08:44PM +0000, Richard Palethorpe wrote:
>> Hello,
>> 
>> Wei Gao via ltp <ltp@lists.linux.it> writes:
>> 
>> > Using fsync() on the test file instead of sync() should be faster than
>> > syncing the whole system.
>> 
>> The test completes in less than a second in OpenQA. We don't want to
>> risk introducing a regression or spend time reviewing changes unless the
>> performance improvement solves a timeout.
>> 
>> I suggest you convert dup06 to the new API (for example) or investigate
>> a test failure.
>> 
> The motivation for this change is based on https://github.com/linux-test-project/ltp/issues/972,
> which gives the following suggestion:
> "As we run the test inside a loop device I guess that we can also 
> sync and drop caches just for the device, which should be faster 
> than syncing and dropping the whole system. Possibly we just need 
> to umount it and mount it again."

I see. Well, unless Cyril can show that the test is actually failing
somewhere (or there is a strong logical argument that this will cause a
failure), this task is still valid, but low priority IMO.

>
> But currently I cannot find any API to sync and drop caches ONLY for
> the device, so in my view replacing the whole-system sync() with an
> fsync() of the single test file should still help a little.

If we don't have one or more concrete failures to focus on, then we
really have to research whether fsync() (or syncfs(), FYI) or unmounting
the device is the correct thing to do. They all have subtly different
effects.
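
Purely to illustrate the syncfs() option (a rough, untested sketch, not a
proposal as-is — MNTPOINT is a made-up name for the test's loop-device
mount point, and it reuses the SAFE_* helpers already used in the patch):

	/* Sketch: sync only the filesystem backing the test file.
	 * Needs _GNU_SOURCE plus <unistd.h> for syncfs() and <fcntl.h>
	 * for the open flags. MNTPOINT is an assumed mount point name. */
	static void sync_test_fs(void)
	{
		int fd = SAFE_OPEN(MNTPOINT, O_RDONLY | O_DIRECTORY);

		/* syncfs(2) flushes just this filesystem, not the whole system */
		if (syncfs(fd) == -1)
			tst_brk(TBROK | TERRNO, "syncfs()");

		SAFE_CLOSE(fd);
	}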

>
>> >
>> > Signed-off-by: Wei Gao <wegao@suse.com>
>> > ---
>> >  .../kernel/syscalls/readahead/readahead02.c     | 17 +++++++++++++----
>> >  1 file changed, 13 insertions(+), 4 deletions(-)
>> >
>> > diff --git a/testcases/kernel/syscalls/readahead/readahead02.c b/testcases/kernel/syscalls/readahead/readahead02.c
>> > index 7acf4bb18..e04046bc3 100644
>> > --- a/testcases/kernel/syscalls/readahead/readahead02.c
>> > +++ b/testcases/kernel/syscalls/readahead/readahead02.c
>> > @@ -99,6 +99,17 @@ static void drop_caches(void)
>> >  	SAFE_FILE_PRINTF(DROP_CACHES_FNAME, "1");
>> >  }
>> >  
>> > +static void sync_drop_caches(void)
>> > +{
>> > +	int fd;
>> > +
>> > +	fd  = SAFE_OPEN(testfile, O_RDONLY);
>> > +	if (fsync(fd) == -1)
>> > +		tst_brk(TBROK | TERRNO, "fsync()");
>> > +	SAFE_CLOSE(fd);
>> > +	drop_caches();
>> > +}
>> > +
>> >  static unsigned long get_bytes_read(void)
>> >  {
>> >  	unsigned long ret;
>> > @@ -233,8 +244,7 @@ static void test_readahead(unsigned int n)
>> >  	read_testfile(tc, 0, testfile, testfile_size, &read_bytes, &usec,
>> >  		      &cached);
>> >  	cached_high = get_cached_size();
>> > -	sync();
>> > -	drop_caches();
>> > +	sync_drop_caches();
>> >  	cached_low = get_cached_size();
>> >  	cached_max = MAX(cached_max, cached_high - cached_low);
>> >  
>> > @@ -246,8 +256,7 @@ static void test_readahead(unsigned int n)
>> >  	else
>> >  		cached = 0;
>> >  
>> > -	sync();
>> > -	drop_caches();
>> > +	sync_drop_caches();
>> >  	cached_low = get_cached_size();
>> >  	tst_res(TINFO, "read_testfile(1)");
>> >  	ret = read_testfile(tc, 1, testfile, testfile_size, &read_bytes_ra,
>> > -- 
>> > 2.35.3
>> 
>> 
>> -- 
>> Thank you,
>> Richard.


-- 
Thank you,
Richard.

-- 
Mailing list info: https://lists.linux.it/listinfo/ltp


* Re: [LTP] [PATCH v1] readahead02.c: Use fsync instead of sync
  2023-01-17  9:23     ` Richard Palethorpe
@ 2023-01-17 16:33       ` Cyril Hrubis
  2023-01-17 16:50         ` Richard Palethorpe
  0 siblings, 1 reply; 6+ messages in thread
From: Cyril Hrubis @ 2023-01-17 16:33 UTC (permalink / raw)
  To: Richard Palethorpe; +Cc: ltp

Hi!
> > The motivation for this change is based on https://github.com/linux-test-project/ltp/issues/972,
> > which gives the following suggestion:
> > "As we run the test inside a loop device I guess that we can also 
> > sync and drop caches just for the device, which should be faster 
> > than syncing and dropping the whole system. Possibly we just need 
> > to umount it and mount it again."
> 
> I see. Well, unless Cyril can show that the test is actually failing
> somewhere (or there is a strong logical argument that this will cause a
> failure), this task is still valid, but low priority IMO.

We do sync more than needed here: since we are looking at the per-device
counters, we only have to sync the device we mount for the test, so this
is an optimization for the case where the system has many dirty pages and
will need seconds or a minute to write them to permanent storage.

> > But currently I cannot find any API to sync and drop caches ONLY for
> > the device, so in my view replacing the whole-system sync() with an
> > fsync() of the single test file should still help a little.
> 
> If we don't have one or more concrete failures to focus on, then we
> really have to research whether fsync() (or syncfs(), FYI) or unmounting
> the device is the correct thing to do. They all have subtly different
> effects.

Looking at the code closely I'm starting to think that the sync is not
required at all. What we do in the test is create a file and sync it to
the external storage. Then we read it a few times and measure the
differences in cache. As far as I can tell we just need to drop the page
cache after we have read the file. What do you think?
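
If we go down that road, one option (just a sketch, not something I have
tested) would be to evict the test file's own pages with posix_fadvise()
instead of writing to /proc/sys/vm/drop_caches. Note POSIX_FADV_DONTNEED
only drops clean pages, so it only helps once the file is written back:

	#include <fcntl.h>	/* posix_fadvise(), POSIX_FADV_DONTNEED */
	#include <string.h>	/* strerror() */

	/* Sketch: drop the cached pages of the test file only */
	static void drop_testfile_cache(void)
	{
		int fd = SAFE_OPEN(testfile, O_RDONLY);
		/* posix_fadvise() returns an error number, not -1/errno */
		int ret = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

		if (ret)
			tst_brk(TBROK, "posix_fadvise(): %s", strerror(ret));

		SAFE_CLOSE(fd);
	}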

In any case I would avoid changing the test before the release, but it's
certainly something we can look at after that.

-- 
Cyril Hrubis
chrubis@suse.cz

-- 
Mailing list info: https://lists.linux.it/listinfo/ltp


* Re: [LTP] [PATCH v1] readahead02.c: Use fsync instead of sync
  2023-01-17 16:33       ` Cyril Hrubis
@ 2023-01-17 16:50         ` Richard Palethorpe
  0 siblings, 0 replies; 6+ messages in thread
From: Richard Palethorpe @ 2023-01-17 16:50 UTC (permalink / raw)
  To: Cyril Hrubis; +Cc: ltp

Hello,

Cyril Hrubis <chrubis@suse.cz> writes:

> Hi!
>> > The motivation for this change is based on https://github.com/linux-test-project/ltp/issues/972,
>> > which gives the following suggestion:
>> > "As we run the test inside a loop device I guess that we can also 
>> > sync and drop caches just for the device, which should be faster 
>> > than syncing and dropping the whole system. Possibly we just need 
>> > to umount it and mount it again."
>> 
>> I see. Well, unless Cyril can show that the test is actually failing
>> somewhere (or there is a strong logical argument that this will cause a
>> failure), this task is still valid, but low priority IMO.
>
> We do sync more than needed here: since we are looking at the per-device
> counters, we only have to sync the device we mount for the test, so this
> is an optimization for the case where the system has many dirty pages and
> will need seconds or a minute to write them to permanent storage.
>
>> > But currently I cannot find any API to sync and drop caches ONLY for
>> > the device, so in my view replacing the whole-system sync() with an
>> > fsync() of the single test file should still help a little.
>> 
>> If we don't have one or more concrete failures to focus on, then we
>> really have to research whether fsync() (or syncfs(), FYI) or unmounting
>> the device is the correct thing to do. They all have subtly different
>> effects.
>
> Looking at the code closely I'm starting to think that the sync is not
> required at all. What we do in the test is create a file and sync it to
> the external storage. Then we read it a few times and measure the
> differences in cache. As far as I can tell we just need to drop the page
> cache after we have read the file. What do you think?
>
> In any case I would avoid changing the test before the release, but it's
> certainly something we can look at after that.

I still think the same as before. It may be valid to drop the sync or
whatever, but it's just not important compared to actively failing tests.

-- 
Thank you,
Richard.

-- 
Mailing list info: https://lists.linux.it/listinfo/ltp


