From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
	Waiman Long <longman@redhat.com>, Boqun Feng <boqun.feng@gmail.com>,
	Liam Girdwood <lgirdwood@gmail.com>, Mark Brown <broonie@kernel.org>,
	linux-kernel@vger.kernel.org, Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: Re: [PATCH v2] kernel/locking: Add context to ww_mutex_trylock.
Date: Mon, 13 Sep 2021 10:42:36 +0200
Message-ID: <e8a7754e-23e7-0250-5718-101a56d008f0@linux.intel.com> (raw)
In-Reply-To: <YTtznr85mg5xXouP@hirez.programming.kicks-ass.net>

On 10-09-2021 17:02, Peter Zijlstra wrote:
> On Thu, Sep 09, 2021 at 11:32:18AM +0200, Maarten Lankhorst wrote:
>> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
>> index d456579d0952..791c28005eef 100644
>> --- a/kernel/locking/mutex.c
>> +++ b/kernel/locking/mutex.c
>> @@ -736,6 +736,44 @@ __ww_mutex_lock(struct mutex *lock, unsigned int state, unsigned int subclass,
>>  	return __mutex_lock_common(lock, state, subclass, NULL, ip, ww_ctx, true);
>>  }
>>  
>> +/**
>> + * ww_mutex_trylock - tries to acquire the w/w mutex with optional acquire context
>> + * @lock: mutex to lock
>> + * @ctx: optional w/w acquire context
>> + *
>> + * Trylocks a mutex with the optional acquire context; no deadlock detection is
>> + * possible. Returns 1 if the mutex has been acquired successfully, 0 otherwise.
>> + *
>> + * Unlike ww_mutex_lock, no deadlock handling is performed. However, if a @ctx is
>> + * specified, -EALREADY and -EDEADLK handling may happen in calls to ww_mutex_lock.
>> + *
>> + * A mutex acquired with this function must be released with ww_mutex_unlock.
>> + */
>> +int __sched
>> +ww_mutex_trylock(struct ww_mutex *ww, struct ww_acquire_ctx *ctx)
>> +{
>> +	bool locked;
>> +
>> +	if (!ctx)
>> +		return mutex_trylock(&ww->base);
>> +
>> +#ifdef CONFIG_DEBUG_MUTEXES
>> +	DEBUG_LOCKS_WARN_ON(ww->base.magic != &ww->base);
>> +#endif
>> +
>> +	preempt_disable();
>> +	locked = __mutex_trylock(&ww->base);
>> +
>> +	if (locked) {
>> +		ww_mutex_set_context_fastpath(ww, ctx);
>> +		mutex_acquire_nest(&ww->base.dep_map, 0, 1, &ctx->dep_map, _RET_IP_);
>> +	}
>> +	preempt_enable();
>> +
>> +	return locked;
>> +}
>> +EXPORT_SYMBOL(ww_mutex_trylock);
>> +
>>  #ifdef CONFIG_DEBUG_LOCK_ALLOC
>>  void __sched
>>  mutex_lock_nested(struct mutex *lock, unsigned int subclass)
>> diff --git a/kernel/locking/ww_rt_mutex.c b/kernel/locking/ww_rt_mutex.c
>> index 3f1fff7d2780..c4cb863edb4c 100644
>> --- a/kernel/locking/ww_rt_mutex.c
>> +++ b/kernel/locking/ww_rt_mutex.c
>> @@ -50,6 +50,18 @@ __ww_rt_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ww_ctx,
>>  	return ret;
>>  }
>>  
>> +int __sched
>> +ww_mutex_trylock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
>> +{
>> +	int locked = rt_mutex_trylock(&lock->base);
>> +
>> +	if (locked && ctx)
>> +		ww_mutex_set_context_fastpath(lock, ctx);
>> +
>> +	return locked;
>> +}
>> +EXPORT_SYMBOL(ww_mutex_trylock);
>> +
>>  int __sched
>>  ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
>>  {
> That doesn't look right, how's this for you?
> 
> ---
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -94,6 +94,9 @@ static inline unsigned long __owner_flag
>  	return owner & MUTEX_FLAGS;
>  }
>  
> +/*
> + * Returns: __mutex_owner(lock) on failure or NULL on success.
> + */
>  static inline struct task_struct *__mutex_trylock_common(struct mutex *lock, bool handoff)
>  {
>  	unsigned long owner, curr = (unsigned long)current;
> @@ -736,6 +739,47 @@ __ww_mutex_lock(struct mutex *lock, unsi
>  	return __mutex_lock_common(lock, state, subclass, NULL, ip, ww_ctx, true);
>  }
>  
> +/**
> + * ww_mutex_trylock - tries to acquire the w/w mutex with optional acquire context
> + * @ww: mutex to lock
> + * @ww_ctx: optional w/w acquire context
> + *
> + * Trylocks a mutex with the optional acquire context; no deadlock detection is
> + * possible. Returns 1 if the mutex has been acquired successfully, 0 otherwise.
> + *
> + * Unlike ww_mutex_lock, no deadlock handling is performed. However, if a @ctx is
> + * specified, -EALREADY handling may happen in calls to ww_mutex_trylock.
> + *
> + * A mutex acquired with this function must be released with ww_mutex_unlock.
> + */
> +int ww_mutex_trylock(struct ww_mutex *ww, struct ww_acquire_ctx *ww_ctx)
> +{
> +	if (!ww_ctx)
> +		return mutex_trylock(&ww->base);
> +
> +	MUTEX_WARN_ON(ww->base.magic != &ww->base);
> +
> +	if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
> +		return -EALREADY;

I'm not 100% sure this is a good idea, because it would make the trylock
weird. For i915 I checked manually, because I didn't want to change the
function signature. This is probably the other extreme.

"if (ww_mutex_trylock())" would look correct, but actually be wrong and
lead to a double unlock without adjustments.

Maybe we could make a ww_mutex_trylock_ctx_err, which would return
-EALREADY or -EBUSY on failure, and 0 on success? We could keep
ww_mutex_trylock without ctx, probably just #define as
(!ww_mutex_trylock_ctx_err(lock, NULL))

> +	/*
> +	 * Reset the wounded flag after a kill. No other process can
> +	 * race and wound us here, since they can't have a valid owner
> +	 * pointer if we don't have any locks held.
> +	 */
> +	if (ww_ctx->acquired == 0)
> +		ww_ctx->wounded = 0;

Yeah, I guess this needs fixing too. I'm not completely sure, since
trylock wouldn't do the whole ww dance, but since it's our first lock,
it's probably best to do so regardless so other users don't trip over it.

> +
> +	if (__mutex_trylock(&ww->base)) {
> +		ww_mutex_set_context_fastpath(ww, ww_ctx);
> +		mutex_acquire_nest(&ww->base.dep_map, 0, 1, &ww_ctx->dep_map, _RET_IP_);
> +		return 1;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(ww_mutex_trylock);
> +
>  #ifdef CONFIG_DEBUG_LOCK_ALLOC
>  void __sched
>  mutex_lock_nested(struct mutex *lock, unsigned int subclass)
> --- a/kernel/locking/ww_rt_mutex.c
> +++ b/kernel/locking/ww_rt_mutex.c
> @@ -9,6 +9,34 @@
>  #define WW_RT
>  #include "rtmutex.c"
>  
> +int ww_mutex_trylock(struct ww_mutex *lock, struct ww_acquire_ctx *ww_ctx)
> +{
> +	struct rt_mutex *rtm = &lock->base;
> +
> +	if (!ww_ctx)
> +		return rt_mutex_trylock(rtm);
> +
> +	if (unlikely(ww_ctx == READ_ONCE(lock->ctx)))
> +		return -EALREADY;
> +
> +	/*
> +	 * Reset the wounded flag after a kill. No other process can
> +	 * race and wound us here, since they can't have a valid owner
> +	 * pointer if we don't have any locks held.
> +	 */
> +	if (ww_ctx->acquired == 0)
> +		ww_ctx->wounded = 0;
> +
> +	if (__rt_mutex_trylock(&rtm->rtmutex)) {
> +		ww_mutex_set_context_fastpath(lock, ww_ctx);
> +		mutex_acquire_nest(&rtm->dep_map, 0, 1, ww_ctx->dep_map, _RET_IP_);
> +		return 1;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(ww_mutex_trylock);
> +
>  static int __sched
>  __ww_rt_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ww_ctx,
>  		   unsigned int state, unsigned long ip)