* [LTP] [PATCH 1/1] fzsync: Add sched_yield for single core machine
@ 2021-01-20 7:00 Leo Yu-Chi Liang
From: Leo Yu-Chi Liang @ 2021-01-20 7:00 UTC (permalink / raw)
To: ltp
The fuzzy sync library uses a spin-waiting mechanism
to implement its thread barrier behavior, which can
make this test time-consuming on single-core machines.
Fix this by calling sched_yield() in the spin-waiting loop,
so that the thread yields the CPU as soon as it enters the waiting loop.
Signed-off-by: Leo Yu-Chi Liang <ycliang@andestech.com>
---
include/tst_fuzzy_sync.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/include/tst_fuzzy_sync.h b/include/tst_fuzzy_sync.h
index 4141f5c64..64d172681 100644
--- a/include/tst_fuzzy_sync.h
+++ b/include/tst_fuzzy_sync.h
@@ -59,9 +59,11 @@
* @sa tst_fzsync_pair
*/
+#include <sys/sysinfo.h>
#include <sys/time.h>
#include <time.h>
#include <math.h>
+#include <sched.h>
#include <stdlib.h>
#include <pthread.h>
#include "tst_atomic.h"
@@ -564,6 +566,8 @@ static inline void tst_fzsync_pair_wait(int *our_cntr,
&& tst_atomic_load(our_cntr) < INT_MAX) {
if (spins)
(*spins)++;
+ if(get_nprocs() == 1)
+ sched_yield();
}
tst_atomic_store(0, other_cntr);
@@ -581,6 +585,8 @@ static inline void tst_fzsync_pair_wait(int *our_cntr,
while (tst_atomic_load(our_cntr) < tst_atomic_load(other_cntr)) {
if (spins)
(*spins)++;
+ if(get_nprocs() == 1)
+ sched_yield();
}
}
}
--
2.17.0
* [LTP] [PATCH 1/1] fzsync: Add sched_yield for single core machine
From: Richard Palethorpe @ 2021-01-20 10:00 UTC (permalink / raw)
To: ltp
Hello Leo,
Leo Yu-Chi Liang <ycliang@andestech.com> writes:
> The fuzzy sync library uses a spin-waiting mechanism
> to implement its thread barrier behavior, which can
> make this test time-consuming on single-core machines.
>
> Fix this by calling sched_yield() in the spin-waiting loop,
> so that the thread yields the CPU as soon as it enters the waiting loop.
Thanks for sending this in. Comments below.
>
> Signed-off-by: Leo Yu-Chi Liang <ycliang@andestech.com>
> ---
> include/tst_fuzzy_sync.h | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/include/tst_fuzzy_sync.h b/include/tst_fuzzy_sync.h
> index 4141f5c64..64d172681 100644
> --- a/include/tst_fuzzy_sync.h
> +++ b/include/tst_fuzzy_sync.h
> @@ -59,9 +59,11 @@
> * @sa tst_fzsync_pair
> */
>
> +#include <sys/sysinfo.h>
> #include <sys/time.h>
> #include <time.h>
> #include <math.h>
> +#include <sched.h>
> #include <stdlib.h>
> #include <pthread.h>
> #include "tst_atomic.h"
> @@ -564,6 +566,8 @@ static inline void tst_fzsync_pair_wait(int *our_cntr,
> && tst_atomic_load(our_cntr) < INT_MAX) {
> if (spins)
> (*spins)++;
> + if(get_nprocs() == 1)
We should use tst_ncpus() and cache the value so that we are not making
a function call within the loop. It is probably best to avoid calling
this function inside tst_fzsync_pair_wait; it may even result in a
system call.
We should probably cache the value in tst_fzsync_pair, maybe as a
boolean e.g. "yield_in_wait". This can be set/checked in the
tst_fzsync_pair_init function. Also this will allow the user to handle
CPUs being offlined if the test itself can cause that.
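The cached-flag idea above might look roughly like the following minimal sketch. The struct, field, and function names here are hypothetical stand-ins (the real tst_fzsync_pair and tst_ncpus() in LTP differ); sysconf() stands in for LTP's CPU-count helper:

```c
#include <sched.h>
#include <unistd.h>

/* Hypothetical, simplified pair state: the single-CPU check is done
 * once at init time instead of calling a (possibly syscall-backed)
 * CPU-count function on every iteration of the hot spin loop. */
struct fzsync_pair_sketch {
	int yield_in_wait;	/* non-zero when only one CPU is online */
};

static void fzsync_pair_init_sketch(struct fzsync_pair_sketch *pair)
{
	/* One query at init, none inside the spin loop. */
	pair->yield_in_wait = (sysconf(_SC_NPROCESSORS_ONLN) <= 1);
}

static void fzsync_wait_step_sketch(const struct fzsync_pair_sketch *pair)
{
	/* Body of one spin iteration: only yield when spinning is
	 * futile because the other thread cannot run concurrently. */
	if (pair->yield_in_wait)
		sched_yield();
}
```

Since the flag lives in the pair struct, a test that offlines CPUs could also refresh it explicitly between runs.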
> + sched_yield();
> }
>
> tst_atomic_store(0, other_cntr);
> @@ -581,6 +585,8 @@ static inline void tst_fzsync_pair_wait(int *our_cntr,
> while (tst_atomic_load(our_cntr) < tst_atomic_load(other_cntr)) {
> if (spins)
> (*spins)++;
> + if(get_nprocs() == 1)
> + sched_yield();
> }
> }
> }
Everyone please note that we will have to test this extensively to
ensure it does not break existing reproducers.
As an alternative to this approach we could create separate implementations
of pair_wait and use a function pointer. I am thinking it may be best to
do it both ways and perform some measurements.
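The function-pointer alternative could be sketched as below (all names hypothetical, not the actual LTP API): the implementation is selected once, so the hot path carries no per-iteration branch on the CPU count.

```c
#include <sched.h>

/* Two yield strategies, chosen once at init time. */
typedef void (*fzsync_yield_fn)(void);

static void fzsync_yield_none(void)
{
	/* Multi-core: keep spinning; yielding would only add latency. */
}

static void fzsync_yield_cpu(void)
{
	/* Single core: give the other thread a chance to run. */
	sched_yield();
}

static fzsync_yield_fn fzsync_pick_yield(long ncpus)
{
	return ncpus > 1 ? fzsync_yield_none : fzsync_yield_cpu;
}
```

The wait loop would then call the chosen pointer unconditionally, which is what makes the two variants easy to benchmark against each other.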
--
Thank you,
Richard.
* [LTP] [PATCH 1/1] fzsync: Add sched_yield for single core machine
From: Leo Liang @ 2021-01-21 2:19 UTC (permalink / raw)
To: ltp
On Wed, Jan 20, 2021 at 06:00:14PM +0800, Richard Palethorpe wrote:
> Hello Leo,
>
> Leo Yu-Chi Liang <ycliang@andestech.com> writes:
>
> > The fuzzy sync library uses a spin-waiting mechanism
> > to implement its thread barrier behavior, which can
> > make this test time-consuming on single-core machines.
> >
> > Fix this by calling sched_yield() in the spin-waiting loop,
> > so that the thread yields the CPU as soon as it enters the waiting loop.
>
> Thanks for sending this in. Comments below.
>
> >
> > Signed-off-by: Leo Yu-Chi Liang <ycliang@andestech.com>
> > ---
> > include/tst_fuzzy_sync.h | 6 ++++++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/include/tst_fuzzy_sync.h b/include/tst_fuzzy_sync.h
> > index 4141f5c64..64d172681 100644
> > --- a/include/tst_fuzzy_sync.h
> > +++ b/include/tst_fuzzy_sync.h
> > @@ -59,9 +59,11 @@
> > * @sa tst_fzsync_pair
> > */
> >
> > +#include <sys/sysinfo.h>
> > #include <sys/time.h>
> > #include <time.h>
> > #include <math.h>
> > +#include <sched.h>
> > #include <stdlib.h>
> > #include <pthread.h>
> > #include "tst_atomic.h"
> > @@ -564,6 +566,8 @@ static inline void tst_fzsync_pair_wait(int *our_cntr,
> > && tst_atomic_load(our_cntr) < INT_MAX) {
> > if (spins)
> > (*spins)++;
> > + if(get_nprocs() == 1)
>
> We should use tst_ncpus() and cache the value so that we are not making
> a function call within the loop. It is probably best to avoid calling
> this function inside tst_fzsync_pair_wait; it may even result in a
> system call.
>
> We should probably cache the value in tst_fzsync_pair, maybe as a
> boolean e.g. "yield_in_wait". This can be set/checked in the
> tst_fzsync_pair_init function. Also this will allow the user to handle
> CPUs being offlined if the test itself can cause that.
>
Got it! Thanks for reviewing the patch and all the heads-ups!
I will refine it and send a v2.
> > + sched_yield();
> > }
> >
> > tst_atomic_store(0, other_cntr);
> > @@ -581,6 +585,8 @@ static inline void tst_fzsync_pair_wait(int *our_cntr,
> > while (tst_atomic_load(our_cntr) < tst_atomic_load(other_cntr)) {
> > if (spins)
> > (*spins)++;
> > + if(get_nprocs() == 1)
> > + sched_yield();
> > }
> > }
> > }
>
> Everyone please note that we will have to test this extensively to
> ensure it does not break existing reproducers.
>
Got it as well, will try to reproduce the CVE with this patch applied.
Thanks again,
Leo
> As an alternative to this approach we could create separate implementations
> of pair_wait and use a function pointer. I am thinking it may be best to
> do it both ways and perform some measurements.
>
> --
> Thank you,
> Richard.
^ permalink raw reply [flat|nested] 3+ messages in thread