* [PATCH v2] seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline
@ 2019-06-11 15:47 Anders Roxell
From: Anders Roxell @ 2019-06-11 15:47 UTC
  To: peterz, mingo, will.deacon; +Cc: tglx, linux-kernel, Anders Roxell

With the function graph tracer, each traced function calls sched_clock()
to take a timestamp. As sched_clock() uses
raw_read_seqcount()/read_seqcount_retry(), we must ensure that these
do not in turn trigger the graph tracer.
Both functions are marked inline. However, with CONFIG_OPTIMIZE_INLINING
set, the compiler may choose not to inline them, making them traceable
when they shouldn't be.
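
For reference, the read side in sched_clock() follows the usual seqcount
retry loop. A simplified sketch of that pattern (cd.seq, read_cycles() and
convert_to_ns() are illustrative stand-ins, not the exact kernel code):

	u64 sched_clock(void)
	{
		u64 res;
		unsigned int seq;

		do {
			/* Neither call may recurse into the tracer. */
			seq = raw_read_seqcount(&cd.seq);
			res = convert_to_ns(read_cycles());
		} while (read_seqcount_retry(&cd.seq, seq));

		return res;
	}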

Rework so that raw_read_seqcount() and read_seqcount_retry() are marked
__always_inline, so they are inlined even when CONFIG_OPTIMIZE_INLINING
is turned on.
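
For context, the kernel's 'inline' macro of that era behaved roughly like
this (paraphrased from include/linux/compiler_types.h; attribute lists
trimmed):

	#ifdef CONFIG_OPTIMIZE_INLINING
	/* 'inline' is only a hint; the compiler may still emit the
	 * function out of line, where the tracer can see it. */
	#define inline inline __gnu_inline
	#else
	/* 'inline' is forced, so the function body always disappears
	 * into its caller and can never be traced. */
	#define inline inline __attribute__((__always_inline__)) __gnu_inline
	#endif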

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
---
 include/linux/seqlock.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index bcf4cf26b8c8..1b18e3df186e 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -127,7 +127,7 @@ static inline unsigned __read_seqcount_begin(const seqcount_t *s)
  * seqcount without any lockdep checking and without checking or
  * masking the LSB. Calling code is responsible for handling that.
  */
-static inline unsigned raw_read_seqcount(const seqcount_t *s)
+static __always_inline unsigned raw_read_seqcount(const seqcount_t *s)
 {
 	unsigned ret = READ_ONCE(s->sequence);
 	smp_rmb();
@@ -215,7 +215,8 @@ static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
  * If the critical section was invalid, it must be ignored (and typically
  * retried).
  */
-static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
+static
+__always_inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
 {
 	smp_rmb();
 	return __read_seqcount_retry(s, start);
-- 
2.20.1
* Re: [PATCH v2] seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline
@ 2019-06-23 11:16 Thomas Gleixner
From: Thomas Gleixner @ 2019-06-23 11:16 UTC
  To: Anders Roxell; +Cc: peterz, mingo, will.deacon, linux-kernel

On Tue, 11 Jun 2019, Anders Roxell wrote:

> With the function graph tracer, each traced function calls sched_clock()
> to take a timestamp. As sched_clock() uses
> raw_read_seqcount()/read_seqcount_retry(), we must ensure that these
> do not in turn trigger the graph tracer.
> Both functions are marked inline. However, with CONFIG_OPTIMIZE_INLINING
> set, the compiler may choose not to inline them, making them traceable
> when they shouldn't be.
> 
> Rework so that raw_read_seqcount() and read_seqcount_retry() are marked
> __always_inline, so they are inlined even when CONFIG_OPTIMIZE_INLINING
> is turned on.

Why just those two? The same issue can happen in other places, with other
clocks that can be utilized by the tracer.

Aside from your particular issue, there is no reason why any of those
functions should ever trigger the graph tracer.

Thanks,

	tglx

* Re: [PATCH v2] seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline
@ 2019-07-26 10:01 Anders Roxell
From: Anders Roxell @ 2019-07-26 10:01 UTC
  To: Thomas Gleixner
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Linux Kernel Mailing List

On Sun, 23 Jun 2019 at 13:16, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Tue, 11 Jun 2019, Anders Roxell wrote:
>
> > With the function graph tracer, each traced function calls sched_clock()
> > to take a timestamp. As sched_clock() uses
> > raw_read_seqcount()/read_seqcount_retry(), we must ensure that these
> > do not in turn trigger the graph tracer.
> > Both functions are marked inline. However, with CONFIG_OPTIMIZE_INLINING
> > set, the compiler may choose not to inline them, making them traceable
> > when they shouldn't be.
> >
> > Rework so that raw_read_seqcount() and read_seqcount_retry() are marked
> > __always_inline, so they are inlined even when CONFIG_OPTIMIZE_INLINING
> > is turned on.
>
> Why just those two? The same issue can happen in other places, with other
> clocks that can be utilized by the tracer.
>
> Aside from your particular issue, there is no reason why any of those
> functions should ever trigger the graph tracer.

Yes, you are correct. I'll update the patch to mark all the functions in
that file as __always_inline.
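
Something along these lines, applied to every accessor (an untested
sketch of the direction, not the actual follow-up patch):

	-static inline unsigned __read_seqcount_begin(const seqcount_t *s)
	+static __always_inline unsigned __read_seqcount_begin(const seqcount_t *s)

	-static inline unsigned read_seqcount_begin(const seqcount_t *s)
	+static __always_inline unsigned read_seqcount_begin(const seqcount_t *s)

and so on for the rest of include/linux/seqlock.h.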

Cheers,
Anders
