* [LTP] [PATCH] read_all: scale down how many times we read by default
@ 2020-06-09 10:55 Jan Stancek
2020-06-09 12:27 ` Li Wang
2020-06-09 12:45 ` Richard Palethorpe
0 siblings, 2 replies; 4+ messages in thread
From: Jan Stancek @ 2020-06-09 10:55 UTC (permalink / raw)
To: ltp
read_all is running into timeouts on systems with many CPUs, where
access to some per-cpu files is protected by a lock. The latest
example is /sys/kernel/tracing/per_cpu/*.
At the moment we read each file 10 times, and we have been
excluding files that take too long. Rather than expanding the
blacklist, scale the default down to 3.
Signed-off-by: Jan Stancek <jstancek@redhat.com>
---
runtest/fs | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/runtest/fs b/runtest/fs
index 464ba8fb9686..5892e9fdaee5 100644
--- a/runtest/fs
+++ b/runtest/fs
@@ -69,9 +69,9 @@ fs_di fs_di -d $TMPDIR
# Was not sure why it should reside in runtest/crashme and won't get tested ever
proc01 proc01 -m 128
-read_all_dev read_all -d /dev -p -q -r 10
-read_all_proc read_all -d /proc -q -r 10
-read_all_sys read_all -d /sys -q -r 10
+read_all_dev read_all -d /dev -p -q -r 3
+read_all_proc read_all -d /proc -q -r 3
+read_all_sys read_all -d /sys -q -r 3
#Run the File System Race Condition Check tests as well
fs_racer fs_racer.sh -t 5
--
2.18.1
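The effect of the lower default can be sketched with some quick arithmetic (the file count below is purely hypothetical; the real number of readable files under /sys varies by system):

```shell
# Hypothetical sketch: the total number of reads scales linearly with
# the -r repeat count, so dropping it from 10 to 3 cuts the worst-case
# time spent on slow, lock-protected files proportionally.
nfiles=1000                       # made-up count of files under /sys
old_reads=$((nfiles * 10))        # previous default: -r 10
new_reads=$((nfiles * 3))         # new default: -r 3
echo "old=$old_reads new=$new_reads"
```

With a per-read lock wait on files like /sys/kernel/tracing/per_cpu/*, this is a 70% reduction in the contended reads without maintaining a longer exclusion list.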
* [LTP] [PATCH] read_all: scale down how many times we read by default
2020-06-09 10:55 [LTP] [PATCH] read_all: scale down how many times we read by default Jan Stancek
@ 2020-06-09 12:27 ` Li Wang
2020-06-09 12:45 ` Richard Palethorpe
1 sibling, 0 replies; 4+ messages in thread
From: Li Wang @ 2020-06-09 12:27 UTC (permalink / raw)
To: ltp
[Cc Richard] in case he has extra suggestions.
On Tue, Jun 9, 2020 at 6:56 PM Jan Stancek <jstancek@redhat.com> wrote:
> read_all is running into timeouts on systems with many CPUs, where
> access to some per-cpu files is protected by a lock. The latest
> example is /sys/kernel/tracing/per_cpu/*.
>
> At the moment we read each file 10 times, and we have been
> excluding files that take too long. Rather than expanding the
> blacklist, scale the default down to 3.
>
> Signed-off-by: Jan Stancek <jstancek@redhat.com>
>
Reviewed-by: Li Wang <liwang@redhat.com>
--
Regards,
Li Wang
* [LTP] [PATCH] read_all: scale down how many times we read by default
2020-06-09 10:55 [LTP] [PATCH] read_all: scale down how many times we read by default Jan Stancek
2020-06-09 12:27 ` Li Wang
@ 2020-06-09 12:45 ` Richard Palethorpe
2020-06-09 13:33 ` Jan Stancek
1 sibling, 1 reply; 4+ messages in thread
From: Richard Palethorpe @ 2020-06-09 12:45 UTC (permalink / raw)
To: ltp
Hello,
Jan Stancek <jstancek@redhat.com> writes:
> read_all is running into timeouts on systems with many CPUs, where
> access to some per-cpu files is protected by a lock. The latest
> example is /sys/kernel/tracing/per_cpu/*.
>
> At the moment we read each file 10 times, and we have been
> excluding files that take too long. Rather than expanding the
> blacklist, scale the default down to 3.
>
> Signed-off-by: Jan Stancek <jstancek@redhat.com>
> ---
> runtest/fs | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/runtest/fs b/runtest/fs
> index 464ba8fb9686..5892e9fdaee5 100644
> --- a/runtest/fs
> +++ b/runtest/fs
> @@ -69,9 +69,9 @@ fs_di fs_di -d $TMPDIR
> # Was not sure why it should reside in runtest/crashme and won't get tested ever
> proc01 proc01 -m 128
>
> -read_all_dev read_all -d /dev -p -q -r 10
> -read_all_proc read_all -d /proc -q -r 10
> -read_all_sys read_all -d /sys -q -r 10
> +read_all_dev read_all -d /dev -p -q -r 3
> +read_all_proc read_all -d /proc -q -r 3
> +read_all_sys read_all -d /sys -q -r 3
>
> #Run the File System Race Condition Check tests as well
> fs_racer fs_racer.sh -t 5
> --
> 2.18.1
OK this makes sense. We shouldn't be stress testing the system in this
runtest file.
--
Thank you,
Richard.
* [LTP] [PATCH] read_all: scale down how many times we read by default
2020-06-09 12:45 ` Richard Palethorpe
@ 2020-06-09 13:33 ` Jan Stancek
0 siblings, 0 replies; 4+ messages in thread
From: Jan Stancek @ 2020-06-09 13:33 UTC (permalink / raw)
To: ltp
----- Original Message -----
> OK this makes sense. We shouldn't be stress testing the system in this
> runtest file.
Pushed.