* [PATCH] CodeSamples/count: change full memory barrier to partial one
@ 2023-04-05 16:56 Alan Huang
  2023-04-06  0:09 ` Akira Yokosawa
  0 siblings, 1 reply; 7+ messages in thread
From: Alan Huang @ 2023-04-05 16:56 UTC (permalink / raw)
  To: paulmck, akiyks; +Cc: perfbook, Alan Huang

The original memory barriers in count_stat_eventual.c ensure that
the write to global_count happens before the write to stopflag, and
that the read from stopflag happens before later reads from
global_count.

Thus, smp_load_acquire and smp_store_release will suffice.
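
For illustration only (not part of the patch), the ordering relied on
here is the classic message-passing pattern.  A minimal LKMM-style
litmus sketch, with the variables abstracted and eventual()'s loop
elided (the "exists" state is forbidden):

C count_stat_eventual_rel_acq

{}

P0(int *global_count, int *stopflag)
{
        /* eventual(): publish the sum, then signal termination. */
        WRITE_ONCE(*global_count, 1);
        smp_store_release(stopflag, 1);
}

P1(int *global_count, int *stopflag)
{
        int r0;
        int r1;

        /* count_cleanup(): wait for the signal, then read the sum. */
        r0 = smp_load_acquire(stopflag);
        r1 = READ_ONCE(*global_count);
}

exists (1:r0=1 /\ 1:r1=0)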

Signed-off-by: Alan Huang <mmpgouride@gmail.com>
---
 CodeSamples/count/count_stat_eventual.c | 6 ++----
 count/count.tex                         | 6 +++---
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/CodeSamples/count/count_stat_eventual.c b/CodeSamples/count/count_stat_eventual.c
index 967644de..7157ee0e 100644
--- a/CodeSamples/count/count_stat_eventual.c
+++ b/CodeSamples/count/count_stat_eventual.c
@@ -51,8 +51,7 @@ void *eventual(void *arg)				//\lnlbl{eventual:b}
 		WRITE_ONCE(global_count, sum);
 		poll(NULL, 0, 1);
 		if (READ_ONCE(stopflag)) {
-			smp_mb();
-			WRITE_ONCE(stopflag, stopflag + 1);
+			smp_store_release(&stopflag, stopflag + 1);
 		}
 	}
 	return NULL;
@@ -73,9 +72,8 @@ void count_init(void)					//\lnlbl{init:b}
 void count_cleanup(void)				//\lnlbl{cleanup:b}
 {
 	WRITE_ONCE(stopflag, 1);
-	while (READ_ONCE(stopflag) < 3)
+	while (smp_load_acquire(&stopflag) < 3)
 		poll(NULL, 0, 1);
-	smp_mb();
 }							//\lnlbl{cleanup:e}
 //\end{snippet}
 
diff --git a/count/count.tex b/count/count.tex
index 80ada104..8ab67e2e 100644
--- a/count/count.tex
+++ b/count/count.tex
@@ -1013,9 +1013,9 @@ between passes.
 
 The \co{count_cleanup()} function on
 \clnrefrange{cleanup:b}{cleanup:e} coordinates termination.
-The calls to \co{smp_mb()} here and in \co{eventual()} ensure
-that all updates to \co{global_count} are visible to code following
-the call to \co{count_cleanup()}.
+The call to \co{smp_load_acquire()} here and the call to \co{smp_store_release()}
+in \co{eventual()} ensure that all updates to \co{global_count} are visible
+to code following the call to \co{count_cleanup()}.
 
 This approach gives extremely fast counter read-out while still
 supporting linear counter-update scalability.
-- 
2.34.1



* Re: [PATCH] CodeSamples/count: change full memory barrier to partial one
  2023-04-05 16:56 [PATCH] CodeSamples/count: change full memory barrier to partial one Alan Huang
@ 2023-04-06  0:09 ` Akira Yokosawa
  2023-04-06 18:26   ` Paul E. McKenney
  0 siblings, 1 reply; 7+ messages in thread
From: Akira Yokosawa @ 2023-04-06  0:09 UTC (permalink / raw)
  To: Alan Huang, paulmck; +Cc: perfbook, Akira Yokosawa

On Wed,  5 Apr 2023 12:56:41 -0400, Alan Huang wrote:
> The original memory barriers in count_stat_eventual.c ensure that
> the write to global_count happens before the write to stopflag, and
> that the read from stopflag happens before later reads from
> global_count.
> 
> Thus, smp_load_acquire and smp_store_release will suffice.
> 
> Signed-off-by: Alan Huang <mmpgouride@gmail.com>
> ---
>  CodeSamples/count/count_stat_eventual.c | 6 ++----
>  count/count.tex                         | 6 +++---
>  2 files changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/CodeSamples/count/count_stat_eventual.c b/CodeSamples/count/count_stat_eventual.c
> index 967644de..7157ee0e 100644
> --- a/CodeSamples/count/count_stat_eventual.c
> +++ b/CodeSamples/count/count_stat_eventual.c
> @@ -51,8 +51,7 @@ void *eventual(void *arg)				//\lnlbl{eventual:b}
>  		WRITE_ONCE(global_count, sum);
>  		poll(NULL, 0, 1);
>  		if (READ_ONCE(stopflag)) {
> -			smp_mb();
> -			WRITE_ONCE(stopflag, stopflag + 1);
> +			smp_store_release(&stopflag, stopflag + 1);
>  		}
>  	}
>  	return NULL;
> @@ -73,9 +72,8 @@ void count_init(void)					//\lnlbl{init:b}
>  void count_cleanup(void)				//\lnlbl{cleanup:b}
>  {
>  	WRITE_ONCE(stopflag, 1);
> -	while (READ_ONCE(stopflag) < 3)
> +	while (smp_load_acquire(&stopflag) < 3)
>  		poll(NULL, 0, 1);
> -	smp_mb();
>  }							//\lnlbl{cleanup:e}
>  //\end{snippet}
>  
> diff --git a/count/count.tex b/count/count.tex
> index 80ada104..8ab67e2e 100644
> --- a/count/count.tex
> +++ b/count/count.tex
> @@ -1013,9 +1013,9 @@ between passes.
>  
>  The \co{count_cleanup()} function on
>  \clnrefrange{cleanup:b}{cleanup:e} coordinates termination.
> -The calls to \co{smp_mb()} here and in \co{eventual()} ensure
> -that all updates to \co{global_count} are visible to code following
> -the call to \co{count_cleanup()}.
> +The call to \co{smp_load_acquire()} here and the call to \co{smp_store_release()}
> +in \co{eventual()} ensure that all updates to \co{global_count} are visible
> +to code following the call to \co{count_cleanup()}.
>  
>  This approach gives extremely fast counter read-out while still
>  supporting linear counter-update scalability.

Looks good to me!  Thanks.

Reviewed-by: Akira Yokosawa <akiyks@gmail.com>


* Re: [PATCH] CodeSamples/count: change full memory barrier to partial one
  2023-04-06  0:09 ` Akira Yokosawa
@ 2023-04-06 18:26   ` Paul E. McKenney
  0 siblings, 0 replies; 7+ messages in thread
From: Paul E. McKenney @ 2023-04-06 18:26 UTC (permalink / raw)
  To: Akira Yokosawa; +Cc: Alan Huang, perfbook

On Thu, Apr 06, 2023 at 09:09:22AM +0900, Akira Yokosawa wrote:
> On Wed,  5 Apr 2023 12:56:41 -0400, Alan Huang wrote:
> > The original memory barriers in count_stat_eventual.c ensure that
> > the write to global_count happens before the write to stopflag, and
> > that the read from stopflag happens before later reads from
> > global_count.
> > 
> > Thus, smp_load_acquire and smp_store_release will suffice.
> > 
> > Signed-off-by: Alan Huang <mmpgouride@gmail.com>
> > ---
> >  CodeSamples/count/count_stat_eventual.c | 6 ++----
> >  count/count.tex                         | 6 +++---
> >  2 files changed, 5 insertions(+), 7 deletions(-)
> > 
> > diff --git a/CodeSamples/count/count_stat_eventual.c b/CodeSamples/count/count_stat_eventual.c
> > index 967644de..7157ee0e 100644
> > --- a/CodeSamples/count/count_stat_eventual.c
> > +++ b/CodeSamples/count/count_stat_eventual.c
> > @@ -51,8 +51,7 @@ void *eventual(void *arg)				//\lnlbl{eventual:b}
> >  		WRITE_ONCE(global_count, sum);
> >  		poll(NULL, 0, 1);
> >  		if (READ_ONCE(stopflag)) {
> > -			smp_mb();
> > -			WRITE_ONCE(stopflag, stopflag + 1);
> > +			smp_store_release(&stopflag, stopflag + 1);
> >  		}
> >  	}
> >  	return NULL;
> > @@ -73,9 +72,8 @@ void count_init(void)					//\lnlbl{init:b}
> >  void count_cleanup(void)				//\lnlbl{cleanup:b}
> >  {
> >  	WRITE_ONCE(stopflag, 1);
> > -	while (READ_ONCE(stopflag) < 3)
> > +	while (smp_load_acquire(&stopflag) < 3)
> >  		poll(NULL, 0, 1);
> > -	smp_mb();
> >  }							//\lnlbl{cleanup:e}
> >  //\end{snippet}
> >  
> > diff --git a/count/count.tex b/count/count.tex
> > index 80ada104..8ab67e2e 100644
> > --- a/count/count.tex
> > +++ b/count/count.tex
> > @@ -1013,9 +1013,9 @@ between passes.
> >  
> >  The \co{count_cleanup()} function on
> >  \clnrefrange{cleanup:b}{cleanup:e} coordinates termination.
> > -The calls to \co{smp_mb()} here and in \co{eventual()} ensure
> > -that all updates to \co{global_count} are visible to code following
> > -the call to \co{count_cleanup()}.
> > +The call to \co{smp_load_acquire()} here and the call to \co{smp_store_release()}
> > +in \co{eventual()} ensure that all updates to \co{global_count} are visible
> > +to code following the call to \co{count_cleanup()}.
> >  
> >  This approach gives extremely fast counter read-out while still
> >  supporting linear counter-update scalability.
> 
> Looks good to me!  Thanks.
> 
> Reviewed-by: Akira Yokosawa <akiyks@gmail.com>

Queued and pushed with the inevitable wordsmithing, thank you both!

							Thanx, Paul


* Re: [PATCH] CodeSamples/count: change full memory barrier to partial one
  2023-04-04 11:28 ` Akira Yokosawa
  2023-04-04 15:07   ` Alan Huang
@ 2023-04-04 15:14   ` Alan Huang
  1 sibling, 0 replies; 7+ messages in thread
From: Alan Huang @ 2023-04-04 15:14 UTC (permalink / raw)
  To: Akira Yokosawa; +Cc: perfbook, paulmck

Hi Akira,

Thanks for your advice!

I’ll try to fix that.

Thanks,
Alan

> On Apr 4, 2023, at 7:28 PM, Akira Yokosawa <akiyks@gmail.com> wrote:
> 
> Hi Alan,
> 
> Thanks for your continued efforts.
> Unfortunately, I (as a LaTeX adviser) see a couple of problems.
> Please see below.
> 
> On Tue,  4 Apr 2023 01:19:21 -0400, Alan Huang wrote:
>> The original memory barriers in count_stat_eventual.c ensure that
>> the write to global_count happens before the write to stopflag, and
>> that the read from stopflag happens before later reads from
>> global_count.
>> 
>> Thus, smp_load_acquire and smp_store_release will suffice.
>> 
>> In count_lim_sig.c, only one ordering is required, namely that the
>> write to the counter happens before theft is set to THEFT_READY on
>> add_count()/sub_count()'s fast path. Therefore, a partial memory
>> barrier will suffice.
>> 
>> Signed-off-by: Alan Huang <mmpgouride@gmail.com>
>> ---
>> CodeSamples/count/count_lim_sig.c       | 10 +++-------
>> CodeSamples/count/count_stat_eventual.c |  6 ++----
>> 2 files changed, 5 insertions(+), 11 deletions(-)
>> 
>> diff --git a/CodeSamples/count/count_lim_sig.c b/CodeSamples/count/count_lim_sig.c
>> index 59da8077..c2f61197 100644
>> --- a/CodeSamples/count/count_lim_sig.c
>> +++ b/CodeSamples/count/count_lim_sig.c
>> @@ -56,12 +56,10 @@ static void flush_local_count_sig(int unused)	//\lnlbl{flush_sig:b}
>> {
>> 	if (READ_ONCE(theft) != THEFT_REQ)	//\lnlbl{flush_sig:check:REQ}
>> 		return;				//\lnlbl{flush_sig:return:n}
>> -	smp_mb();				//\lnlbl{flush_sig:mb:1}
> 
> Removing a line that has //\lnlbl{foo} from a code snippet will cause
> warnings of "Reference ... undefined on ...".
> 
> I think you will see warnings that look like:
> 
>    ----- Excerpt around remaining warning messages -----
>    [63]
>    <count/sig-theft.pdf, id=5442, 172.645pt x 234.8775pt>
>    File: count/sig-theft.pdf Graphic file (type pdf)
>    <use count/sig-theft.pdf>
>    Package pdftex.def Info: count/sig-theft.pdf  used on input line 2303.
>    (pdftex.def)             Requested size: 172.64456pt x 234.87692pt.
>     [64 <./count/sig-theft.pdf>] (./CodeSamples/count/count_lim_sig@data.fcv)
> 
>    LaTeX Warning: Reference `ln:count:count_lim_sig:migration:flush_sig:mb:1' on p
>    age 65 undefined on input line 2428.
> 
>    [...]
> 
> You need to update text-side references to the to-be-removed label.
> You will likely need to adjust wording in surrounding text as well.
> 
> If you are wondering how these labels in code snippets can be
> referenced from LaTeX sources, please see Appendix D.3.1.1.
> 
>> 	WRITE_ONCE(theft, THEFT_ACK);		//\lnlbl{flush_sig:set:ACK}
>> 	if (!counting) {			//\lnlbl{flush_sig:check:fast}
>> -		WRITE_ONCE(theft, THEFT_READY);	//\lnlbl{flush_sig:set:READY}
>> +		smp_store_release(&theft, THEFT_READY);	//\lnlbl{flush_sig:set:READY}
>> 	}
>> -	smp_mb();
>> }						//\lnlbl{flush_sig:e}
>> 
>> static void flush_local_count(void)			//\lnlbl{flush:b}
>> @@ -125,8 +123,7 @@ int add_count(unsigned long delta)			//\lnlbl{b}
>> 	WRITE_ONCE(counting, 0);			//\lnlbl{clearcnt}
>> 	barrier();					//\lnlbl{barrier:3}
>> 	if (READ_ONCE(theft) == THEFT_ACK) {		//\lnlbl{check:ACK}
>> -		smp_mb();				//\lnlbl{mb}
> 
> Ditto.
> 
> Can you submit a v2 including text-side updates?
> 
> In the meantime, I'll review your code changes.
> 
>        Thanks, Akira
> 
> 
>> -		WRITE_ONCE(theft, THEFT_READY);		//\lnlbl{READY}
>> +		smp_store_release(&theft, THEFT_READY);		//\lnlbl{READY}
>> 	}
>> 	if (fastpath)
>> 		return 1;				//\lnlbl{return:fs}
>> @@ -164,8 +161,7 @@ int sub_count(unsigned long delta)
>> 	WRITE_ONCE(counting, 0);
>> 	barrier();
>> 	if (READ_ONCE(theft) == THEFT_ACK) {
>> -		smp_mb();
>> -		WRITE_ONCE(theft, THEFT_READY);
>> +		smp_store_release(&theft, THEFT_READY);
>> 	}
>> 	if (fastpath)
>> 		return 1;
>> diff --git a/CodeSamples/count/count_stat_eventual.c b/CodeSamples/count/count_stat_eventual.c
>> index 967644de..7157ee0e 100644
>> --- a/CodeSamples/count/count_stat_eventual.c
>> +++ b/CodeSamples/count/count_stat_eventual.c
>> @@ -51,8 +51,7 @@ void *eventual(void *arg)				//\lnlbl{eventual:b}
>> 		WRITE_ONCE(global_count, sum);
>> 		poll(NULL, 0, 1);
>> 		if (READ_ONCE(stopflag)) {
>> -			smp_mb();
>> -			WRITE_ONCE(stopflag, stopflag + 1);
>> +			smp_store_release(&stopflag, stopflag + 1);
>> 		}
>> 	}
>> 	return NULL;
>> @@ -73,9 +72,8 @@ void count_init(void)					//\lnlbl{init:b}
>> void count_cleanup(void)				//\lnlbl{cleanup:b}
>> {
>> 	WRITE_ONCE(stopflag, 1);
>> -	while (READ_ONCE(stopflag) < 3)
>> +	while (smp_load_acquire(&stopflag) < 3)
>> 		poll(NULL, 0, 1);
>> -	smp_mb();
>> }							//\lnlbl{cleanup:e}
>> //\end{snippet}



* Re: [PATCH] CodeSamples/count: change full memory barrier to partial one
  2023-04-04 11:28 ` Akira Yokosawa
@ 2023-04-04 15:07   ` Alan Huang
  2023-04-04 15:14   ` Alan Huang
  1 sibling, 0 replies; 7+ messages in thread
From: Alan Huang @ 2023-04-04 15:07 UTC (permalink / raw)
  To: Akira Yokosawa; +Cc: perfbook, paulmck


Hi Akira,

Thanks for your advice!

I’ll try to fix that.

Thanks,
Alan

> On Apr 4, 2023, at 7:28 PM, Akira Yokosawa <akiyks@gmail.com> wrote:
> 
> Hi Alan,
> 
> Thanks for your continued efforts.
> Unfortunately, I (as a LaTeX adviser) see a couple of problems.
> Please see below.
> 
> On Tue,  4 Apr 2023 01:19:21 -0400, Alan Huang wrote:
>> The original memory barriers in count_stat_eventual.c ensure that
>> the write to global_count happens before the write to stopflag, and
>> that the read from stopflag happens before later reads from
>> global_count.
>> 
>> Thus, smp_load_acquire and smp_store_release will suffice.
>> 
>> In count_lim_sig.c, only one ordering is required, namely that the
>> write to the counter happens before theft is set to THEFT_READY on
>> add_count()/sub_count()'s fast path. Therefore, a partial memory
>> barrier will suffice.
>> 
>> Signed-off-by: Alan Huang <mmpgouride@gmail.com>
>> ---
>> CodeSamples/count/count_lim_sig.c       | 10 +++-------
>> CodeSamples/count/count_stat_eventual.c |  6 ++----
>> 2 files changed, 5 insertions(+), 11 deletions(-)
>> 
>> diff --git a/CodeSamples/count/count_lim_sig.c b/CodeSamples/count/count_lim_sig.c
>> index 59da8077..c2f61197 100644
>> --- a/CodeSamples/count/count_lim_sig.c
>> +++ b/CodeSamples/count/count_lim_sig.c
>> @@ -56,12 +56,10 @@ static void flush_local_count_sig(int unused)	//\lnlbl{flush_sig:b}
>> {
>> 	if (READ_ONCE(theft) != THEFT_REQ)	//\lnlbl{flush_sig:check:REQ}
>> 		return;				//\lnlbl{flush_sig:return:n}
>> -	smp_mb();				//\lnlbl{flush_sig:mb:1}
> 
> Removing a line that has //\lnlbl{foo} from a code snippet will cause
> warnings of "Reference ... undefined on ...".
> 
> I think you will see warnings that look like:
> 
>    ----- Excerpt around remaining warning messages -----
>    [63]
>    <count/sig-theft.pdf, id=5442, 172.645pt x 234.8775pt>
>    File: count/sig-theft.pdf Graphic file (type pdf)
>    <use count/sig-theft.pdf>
>    Package pdftex.def Info: count/sig-theft.pdf  used on input line 2303.
>    (pdftex.def)             Requested size: 172.64456pt x 234.87692pt.
>    [64 <./count/sig-theft.pdf>] (./CodeSamples/count/count_lim_sig@data.fcv)
> 
>    LaTeX Warning: Reference `ln:count:count_lim_sig:migration:flush_sig:mb:1' on p
>    age 65 undefined on input line 2428.
> 
>    [...]
> 
> You need to update text-side references to the to-be-removed label.
> You will likely need to adjust wording in surrounding text as well.
> 
> If you are wondering how these labels in code snippets can be
> referenced from LaTeX sources, please see Appendix D.3.1.1.
> 
>> 	WRITE_ONCE(theft, THEFT_ACK);		//\lnlbl{flush_sig:set:ACK}
>> 	if (!counting) {			//\lnlbl{flush_sig:check:fast}
>> -		WRITE_ONCE(theft, THEFT_READY);	//\lnlbl{flush_sig:set:READY}
>> +		smp_store_release(&theft, THEFT_READY);	//\lnlbl{flush_sig:set:READY}
>> 	}
>> -	smp_mb();
>> }						//\lnlbl{flush_sig:e}
>> 
>> static void flush_local_count(void)			//\lnlbl{flush:b}
>> @@ -125,8 +123,7 @@ int add_count(unsigned long delta)			//\lnlbl{b}
>> 	WRITE_ONCE(counting, 0);			//\lnlbl{clearcnt}
>> 	barrier();					//\lnlbl{barrier:3}
>> 	if (READ_ONCE(theft) == THEFT_ACK) {		//\lnlbl{check:ACK}
>> -		smp_mb();				//\lnlbl{mb}
> 
> Ditto.
> 
> Can you submit a v2 including text-side updates?
> 
> In the meantime, I'll review your code changes.
> 
>        Thanks, Akira
> 
> 
>> -		WRITE_ONCE(theft, THEFT_READY);		//\lnlbl{READY}
>> +		smp_store_release(&theft, THEFT_READY);		//\lnlbl{READY}
>> 	}
>> 	if (fastpath)
>> 		return 1;				//\lnlbl{return:fs}
>> @@ -164,8 +161,7 @@ int sub_count(unsigned long delta)
>> 	WRITE_ONCE(counting, 0);
>> 	barrier();
>> 	if (READ_ONCE(theft) == THEFT_ACK) {
>> -		smp_mb();
>> -		WRITE_ONCE(theft, THEFT_READY);
>> +		smp_store_release(&theft, THEFT_READY);
>> 	}
>> 	if (fastpath)
>> 		return 1;
>> diff --git a/CodeSamples/count/count_stat_eventual.c b/CodeSamples/count/count_stat_eventual.c
>> index 967644de..7157ee0e 100644
>> --- a/CodeSamples/count/count_stat_eventual.c
>> +++ b/CodeSamples/count/count_stat_eventual.c
>> @@ -51,8 +51,7 @@ void *eventual(void *arg)				//\lnlbl{eventual:b}
>> 		WRITE_ONCE(global_count, sum);
>> 		poll(NULL, 0, 1);
>> 		if (READ_ONCE(stopflag)) {
>> -			smp_mb();
>> -			WRITE_ONCE(stopflag, stopflag + 1);
>> +			smp_store_release(&stopflag, stopflag + 1);
>> 		}
>> 	}
>> 	return NULL;
>> @@ -73,9 +72,8 @@ void count_init(void)					//\lnlbl{init:b}
>> void count_cleanup(void)				//\lnlbl{cleanup:b}
>> {
>> 	WRITE_ONCE(stopflag, 1);
>> -	while (READ_ONCE(stopflag) < 3)
>> +	while (smp_load_acquire(&stopflag) < 3)
>> 		poll(NULL, 0, 1);
>> -	smp_mb();
>> }							//\lnlbl{cleanup:e}
>> //\end{snippet}




* Re: [PATCH] CodeSamples/count: change full memory barrier to partial one
  2023-04-04  5:19 Alan Huang
@ 2023-04-04 11:28 ` Akira Yokosawa
  2023-04-04 15:07   ` Alan Huang
  2023-04-04 15:14   ` Alan Huang
  0 siblings, 2 replies; 7+ messages in thread
From: Akira Yokosawa @ 2023-04-04 11:28 UTC (permalink / raw)
  To: Alan Huang; +Cc: perfbook, paulmck, Akira Yokosawa

Hi Alan,

Thanks for your continued efforts.
Unfortunately, I (as a LaTeX adviser) see a couple of problems.
Please see below.

On Tue,  4 Apr 2023 01:19:21 -0400, Alan Huang wrote:
> The original memory barriers in count_stat_eventual.c ensure that
> the write to global_count happens before the write to stopflag, and
> that the read from stopflag happens before later reads from
> global_count.
> 
> Thus, smp_load_acquire and smp_store_release will suffice.
> 
> In count_lim_sig.c, only one ordering is required, namely that the
> write to the counter happens before theft is set to THEFT_READY on
> add_count()/sub_count()'s fast path. Therefore, a partial memory
> barrier will suffice.
> 
> Signed-off-by: Alan Huang <mmpgouride@gmail.com>
> ---
>  CodeSamples/count/count_lim_sig.c       | 10 +++-------
>  CodeSamples/count/count_stat_eventual.c |  6 ++----
>  2 files changed, 5 insertions(+), 11 deletions(-)
> 
> diff --git a/CodeSamples/count/count_lim_sig.c b/CodeSamples/count/count_lim_sig.c
> index 59da8077..c2f61197 100644
> --- a/CodeSamples/count/count_lim_sig.c
> +++ b/CodeSamples/count/count_lim_sig.c
> @@ -56,12 +56,10 @@ static void flush_local_count_sig(int unused)	//\lnlbl{flush_sig:b}
>  {
>  	if (READ_ONCE(theft) != THEFT_REQ)	//\lnlbl{flush_sig:check:REQ}
>  		return;				//\lnlbl{flush_sig:return:n}
> -	smp_mb();				//\lnlbl{flush_sig:mb:1}

Removing a line that has //\lnlbl{foo} from a code snippet will cause
warnings of "Reference ... undefined on ...".

I think you will see warnings that look like:

    ----- Excerpt around remaining warning messages -----
    [63]
    <count/sig-theft.pdf, id=5442, 172.645pt x 234.8775pt>
    File: count/sig-theft.pdf Graphic file (type pdf)
    <use count/sig-theft.pdf>
    Package pdftex.def Info: count/sig-theft.pdf  used on input line 2303.
    (pdftex.def)             Requested size: 172.64456pt x 234.87692pt.
     [64 <./count/sig-theft.pdf>] (./CodeSamples/count/count_lim_sig@data.fcv)

    LaTeX Warning: Reference `ln:count:count_lim_sig:migration:flush_sig:mb:1' on p
    age 65 undefined on input line 2428.

    [...]

You need to update text-side references to the to-be-removed label.
You will likely need to adjust wording in surrounding text as well.

If you are wondering how these labels in code snippets can be
referenced from LaTeX sources, please see Appendix D.3.1.1.
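
To make this concrete: the count_cleanup() snippet in
count_stat_eventual.c carries labels such as:

    void count_cleanup(void)				//\lnlbl{cleanup:b}
    ...
    }							//\lnlbl{cleanup:e}

and count/count.tex refers to them with:

    The \co{count_cleanup()} function on
    \clnrefrange{cleanup:b}{cleanup:e} coordinates termination.

Deleting a //\lnlbl{...} line without also updating the text-side
reference (for example, a \clnrefrange as above) is what produces the
"Reference ... undefined" warning quoted above.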

>  	WRITE_ONCE(theft, THEFT_ACK);		//\lnlbl{flush_sig:set:ACK}
>  	if (!counting) {			//\lnlbl{flush_sig:check:fast}
> -		WRITE_ONCE(theft, THEFT_READY);	//\lnlbl{flush_sig:set:READY}
> +		smp_store_release(&theft, THEFT_READY);	//\lnlbl{flush_sig:set:READY}
>  	}
> -	smp_mb();
>  }						//\lnlbl{flush_sig:e}
>  
>  static void flush_local_count(void)			//\lnlbl{flush:b}
> @@ -125,8 +123,7 @@ int add_count(unsigned long delta)			//\lnlbl{b}
>  	WRITE_ONCE(counting, 0);			//\lnlbl{clearcnt}
>  	barrier();					//\lnlbl{barrier:3}
>  	if (READ_ONCE(theft) == THEFT_ACK) {		//\lnlbl{check:ACK}
> -		smp_mb();				//\lnlbl{mb}

Ditto.

Can you submit a v2 including text-side updates?

In the meantime, I'll review your code changes.

        Thanks, Akira


> -		WRITE_ONCE(theft, THEFT_READY);		//\lnlbl{READY}
> +		smp_store_release(&theft, THEFT_READY);		//\lnlbl{READY}
>  	}
>  	if (fastpath)
>  		return 1;				//\lnlbl{return:fs}
> @@ -164,8 +161,7 @@ int sub_count(unsigned long delta)
>  	WRITE_ONCE(counting, 0);
>  	barrier();
>  	if (READ_ONCE(theft) == THEFT_ACK) {
> -		smp_mb();
> -		WRITE_ONCE(theft, THEFT_READY);
> +		smp_store_release(&theft, THEFT_READY);
>  	}
>  	if (fastpath)
>  		return 1;
> diff --git a/CodeSamples/count/count_stat_eventual.c b/CodeSamples/count/count_stat_eventual.c
> index 967644de..7157ee0e 100644
> --- a/CodeSamples/count/count_stat_eventual.c
> +++ b/CodeSamples/count/count_stat_eventual.c
> @@ -51,8 +51,7 @@ void *eventual(void *arg)				//\lnlbl{eventual:b}
>  		WRITE_ONCE(global_count, sum);
>  		poll(NULL, 0, 1);
>  		if (READ_ONCE(stopflag)) {
> -			smp_mb();
> -			WRITE_ONCE(stopflag, stopflag + 1);
> +			smp_store_release(&stopflag, stopflag + 1);
>  		}
>  	}
>  	return NULL;
> @@ -73,9 +72,8 @@ void count_init(void)					//\lnlbl{init:b}
>  void count_cleanup(void)				//\lnlbl{cleanup:b}
>  {
>  	WRITE_ONCE(stopflag, 1);
> -	while (READ_ONCE(stopflag) < 3)
> +	while (smp_load_acquire(&stopflag) < 3)
>  		poll(NULL, 0, 1);
> -	smp_mb();
>  }							//\lnlbl{cleanup:e}
>  //\end{snippet}
>  


* [PATCH] CodeSamples/count: change full memory barrier to partial one
@ 2023-04-04  5:19 Alan Huang
  2023-04-04 11:28 ` Akira Yokosawa
  0 siblings, 1 reply; 7+ messages in thread
From: Alan Huang @ 2023-04-04  5:19 UTC (permalink / raw)
  To: paulmck; +Cc: perfbook, Alan Huang

The original memory barriers in count_stat_eventual.c ensure that
the write to global_count happens before the write to stopflag, and
that the read from stopflag happens before later reads from
global_count.

Thus, smp_load_acquire and smp_store_release will suffice.

In count_lim_sig.c, only one ordering is required, namely that the
write to the counter happens before theft is set to THEFT_READY on
add_count()/sub_count()'s fast path. Therefore, a partial memory
barrier will suffice.
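
Again for illustration only (not part of the patch), the fast-path
requirement in count_lim_sig.c reduces to the same shape.  In this
sketch the per-thread counter update is abstracted to a single store,
THEFT_ACK and THEFT_READY are abstracted to 1 and 2, and the theft
side is simplified to a plain acquire load (the "exists" state is
forbidden):

C count_lim_sig_fastpath

{
        theft=1;
}

P0(int *counter, int *theft)
{
        int r0;

        /* add_count()/sub_count() fast path: update the counter, then
         * mark a pending theft THEFT_READY with a release store. */
        WRITE_ONCE(*counter, 1);
        r0 = READ_ONCE(*theft);
        if (r0)
                smp_store_release(theft, 2);
}

P1(int *counter, int *theft)
{
        int r1;
        int r2;

        /* Theft side: observe THEFT_READY, then read the counter. */
        r1 = smp_load_acquire(theft);
        r2 = READ_ONCE(*counter);
}

exists (1:r1=2 /\ 1:r2=0)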

Signed-off-by: Alan Huang <mmpgouride@gmail.com>
---
 CodeSamples/count/count_lim_sig.c       | 10 +++-------
 CodeSamples/count/count_stat_eventual.c |  6 ++----
 2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/CodeSamples/count/count_lim_sig.c b/CodeSamples/count/count_lim_sig.c
index 59da8077..c2f61197 100644
--- a/CodeSamples/count/count_lim_sig.c
+++ b/CodeSamples/count/count_lim_sig.c
@@ -56,12 +56,10 @@ static void flush_local_count_sig(int unused)	//\lnlbl{flush_sig:b}
 {
 	if (READ_ONCE(theft) != THEFT_REQ)	//\lnlbl{flush_sig:check:REQ}
 		return;				//\lnlbl{flush_sig:return:n}
-	smp_mb();				//\lnlbl{flush_sig:mb:1}
 	WRITE_ONCE(theft, THEFT_ACK);		//\lnlbl{flush_sig:set:ACK}
 	if (!counting) {			//\lnlbl{flush_sig:check:fast}
-		WRITE_ONCE(theft, THEFT_READY);	//\lnlbl{flush_sig:set:READY}
+		smp_store_release(&theft, THEFT_READY);	//\lnlbl{flush_sig:set:READY}
 	}
-	smp_mb();
 }						//\lnlbl{flush_sig:e}
 
 static void flush_local_count(void)			//\lnlbl{flush:b}
@@ -125,8 +123,7 @@ int add_count(unsigned long delta)			//\lnlbl{b}
 	WRITE_ONCE(counting, 0);			//\lnlbl{clearcnt}
 	barrier();					//\lnlbl{barrier:3}
 	if (READ_ONCE(theft) == THEFT_ACK) {		//\lnlbl{check:ACK}
-		smp_mb();				//\lnlbl{mb}
-		WRITE_ONCE(theft, THEFT_READY);		//\lnlbl{READY}
+		smp_store_release(&theft, THEFT_READY);		//\lnlbl{READY}
 	}
 	if (fastpath)
 		return 1;				//\lnlbl{return:fs}
@@ -164,8 +161,7 @@ int sub_count(unsigned long delta)
 	WRITE_ONCE(counting, 0);
 	barrier();
 	if (READ_ONCE(theft) == THEFT_ACK) {
-		smp_mb();
-		WRITE_ONCE(theft, THEFT_READY);
+		smp_store_release(&theft, THEFT_READY);
 	}
 	if (fastpath)
 		return 1;
diff --git a/CodeSamples/count/count_stat_eventual.c b/CodeSamples/count/count_stat_eventual.c
index 967644de..7157ee0e 100644
--- a/CodeSamples/count/count_stat_eventual.c
+++ b/CodeSamples/count/count_stat_eventual.c
@@ -51,8 +51,7 @@ void *eventual(void *arg)				//\lnlbl{eventual:b}
 		WRITE_ONCE(global_count, sum);
 		poll(NULL, 0, 1);
 		if (READ_ONCE(stopflag)) {
-			smp_mb();
-			WRITE_ONCE(stopflag, stopflag + 1);
+			smp_store_release(&stopflag, stopflag + 1);
 		}
 	}
 	return NULL;
@@ -73,9 +72,8 @@ void count_init(void)					//\lnlbl{init:b}
 void count_cleanup(void)				//\lnlbl{cleanup:b}
 {
 	WRITE_ONCE(stopflag, 1);
-	while (READ_ONCE(stopflag) < 3)
+	while (smp_load_acquire(&stopflag) < 3)
 		poll(NULL, 0, 1);
-	smp_mb();
 }							//\lnlbl{cleanup:e}
 //\end{snippet}
 
-- 
2.34.1



end of thread, other threads:[~2023-04-06 18:27 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-05 16:56 [PATCH] CodeSamples/count: change full memory barrier to partial one Alan Huang
2023-04-06  0:09 ` Akira Yokosawa
2023-04-06 18:26   ` Paul E. McKenney
  -- strict thread matches above, loose matches on Subject: below --
2023-04-04  5:19 Alan Huang
2023-04-04 11:28 ` Akira Yokosawa
2023-04-04 15:07   ` Alan Huang
2023-04-04 15:14   ` Alan Huang
