* [PATCH 0/2] lockdep cross-release fallout from -rc1
@ 2017-11-29 15:41 Daniel Vetter
  2017-11-29 15:41 ` [PATCH 1/2] lockdep: finer-grained completion key for kthread Daniel Vetter
  2017-11-29 15:41 ` [PATCH 2/2] lockdep: Up MAX_LOCKDEP_CHAINS Daniel Vetter
  0 siblings, 2 replies; 6+ messages in thread

From: Daniel Vetter @ 2017-11-29 15:41 UTC (permalink / raw)
  To: LKML, Peter Zijlstra, Ingo Molnar, Thomas Gleixner
  Cc: Intel Graphics Development, Tejun Heo, Kees Cook, Daniel Vetter

Hi all,

-rc1 set our CI on fire with a pile of issues that cross-release
highlights. The two patches in this series get things back into working
order on our side, so we pulled them into our local branches to unblock
CI and drm/i915.

But they're of course far from polished, so please look at this more as a
bug report than a bugfix submission :-)

Cheers, Daniel

Daniel Vetter (2):
  lockdep: finer-grained completion key for kthread
  lockdep: Up MAX_LOCKDEP_CHAINS

 include/linux/kthread.h            | 48 ++++++++++++++++++++++----
 kernel/kthread.c                   | 70 ++++++++++++++++++++++++++------------
 kernel/locking/lockdep_internals.h |  2 +-
 3 files changed, 91 insertions(+), 29 deletions(-)

-- 
2.15.0

^ permalink raw reply	[flat|nested] 6+ messages in thread
* [PATCH 1/2] lockdep: finer-grained completion key for kthread
  2017-11-29 15:41 [PATCH 0/2] lockdep cross-release fallout from -rc1 Daniel Vetter
@ 2017-11-29 15:41 ` Daniel Vetter
  2017-12-04  8:16   ` Daniel Vetter
  2017-11-29 15:41 ` [PATCH 2/2] lockdep: Up MAX_LOCKDEP_CHAINS Daniel Vetter
  1 sibling, 1 reply; 6+ messages in thread

From: Daniel Vetter @ 2017-11-29 15:41 UTC (permalink / raw)
  To: LKML, Peter Zijlstra, Ingo Molnar, Thomas Gleixner
  Cc: Intel Graphics Development, Tejun Heo, Kees Cook, Daniel Vetter,
      Tvrtko Ursulin, Marta Lofstedt, Daniel Vetter

Ideally we'd create the key through a macro at the real callers and
pass it all the way down. This would give us better coverage for cases
where a bunch of kthreads are created for the same thing. But this gets
the job done meanwhile and unblocks our CI. Refining later on is always
possible.

v2:
- make it compile
- use the right map (Tvrtko)

v3: lockdep insists on a static key, so the cheap way didn't work. Wire
two keys through all the callers.

I didn't extend this up to alloc_workqueue because the
lockdep_invariant_state() call should separate the work functions from
the workqueue kthread logic and prevent cross-release state from
leaking between unrelated work queues that happen to reuse the same
kthreads.
v4: CI found more compile fail :-/

Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Marta Lofstedt <marta.lofstedt@intel.com>
References: https://bugs.freedesktop.org/show_bug.cgi?id=103950
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
---
 include/linux/kthread.h | 48 ++++++++++++++++++++++++++++-----
 kernel/kthread.c        | 70 ++++++++++++++++++++++++++++++++++---------------
 2 files changed, 90 insertions(+), 28 deletions(-)

diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index c1961761311d..7a9463f0be5c 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -6,10 +6,12 @@
 #include <linux/sched.h>
 #include <linux/cgroup.h>
 
-__printf(4, 5)
-struct task_struct *kthread_create_on_node(int (*threadfn)(void *data),
+__printf(6, 7)
+struct task_struct *_kthread_create_on_node(int (*threadfn)(void *data),
 					   void *data,
 					   int node,
+					   struct lock_class_key *exited_key,
+					   struct lock_class_key *parked_key,
 					   const char namefmt[], ...);
 
 /**
@@ -25,12 +27,27 @@ struct task_struct *kthread_create_on_node(int (*threadfn)(void *data),
  */
 #define kthread_create(threadfn, data, namefmt, arg...) \
 	kthread_create_on_node(threadfn, data, NUMA_NO_NODE, namefmt, ##arg)
+#define kthread_create_on_node(threadfn, data, node, namefmt, arg...)	\
+({									\
+	static struct lock_class_key __exited_key, __parked_key;	\
+	_kthread_create_on_node(threadfn, data, node, &__exited_key,	\
+				&__parked_key, namefmt, ##arg);		\
+})
 
-struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
+struct task_struct *_kthread_create_on_cpu(int (*threadfn)(void *data),
 					  void *data,
 					  unsigned int cpu,
+					  struct lock_class_key *exited_key,
+					  struct lock_class_key *parked_key,
 					  const char *namefmt);
+#define kthread_create_on_cpu(threadfn, data, cpu, namefmt)		\
+({									\
+	static struct lock_class_key __exited_key, __parked_key;	\
+	_kthread_create_on_cpu(threadfn, data, cpu, &__exited_key,	\
+			       &__parked_key, namefmt);			\
+})
+
 
 /**
  * kthread_run - create and wake a thread.
@@ -171,13 +188,30 @@ extern void __kthread_init_worker(struct kthread_worker *worker,
 
 int kthread_worker_fn(void *worker_ptr);
 
-__printf(2, 3)
+__printf(4, 5)
 struct kthread_worker *
-kthread_create_worker(unsigned int flags, const char namefmt[], ...);
+_kthread_create_worker(unsigned int flags,
+		       struct lock_class_key *exited_key,
+		       struct lock_class_key *parked_key,
+		       const char namefmt[], ...);
+#define kthread_create_worker(flags, namefmt...)			\
+({									\
+	static struct lock_class_key __exited_key, __parked_key;	\
+	_kthread_create_worker(flags, &__exited_key, &__parked_key,	\
+			       ##namefmt);				\
+})
 
-__printf(3, 4) struct kthread_worker *
-kthread_create_worker_on_cpu(int cpu, unsigned int flags,
+__printf(5, 6) struct kthread_worker *
+_kthread_create_worker_on_cpu(int cpu, unsigned int flags,
+			      struct lock_class_key *exited_key,
+			      struct lock_class_key *parked_key,
 			      const char namefmt[], ...);
+#define kthread_create_worker_on_cpu(cpu, flags, namefmt...)		\
+({									\
+	static struct lock_class_key __exited_key, __parked_key;	\
+	_kthread_create_worker_on_cpu(cpu, flags, &__exited_key, &__parked_key,\
+				      ##namefmt);			\
+})
 
 bool kthread_queue_work(struct kthread_worker *worker,
 			struct kthread_work *work);
diff --git a/kernel/kthread.c b/kernel/kthread.c
index cd50e99202b0..9022806818fc 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -32,6 +32,7 @@ struct kthread_create_info
 	int (*threadfn)(void *data);
 	void *data;
 	int node;
+	struct lock_class_key *exited_key, *parked_key;
 
 	/* Result passed back to kthread_create() from kthreadd. */
 	struct task_struct *result;
@@ -221,8 +222,17 @@ static int kthread(void *_create)
 	}
 
 	self->data = data;
-	init_completion(&self->exited);
-	init_completion(&self->parked);
+	/* these two completions are shared by all kthreads, which is bonghist
+	 * imo */
+	lockdep_init_map_crosslock(&self->exited.map.map,
+				   "(kthread completion)->exited",
+				   create->exited_key, 0);
+	init_completion_map(&self->exited, &self->exited.map.map);
+	lockdep_init_map_crosslock(&self->parked.map.map,
+				   "(kthread completion)->parked",
+				   create->parked_key, 0);
+	init_completion_map(&self->parked, &self->parked.map.map);
+
 	current->vfork_done = &self->exited;
 
 	/* OK, tell user we're spawned, wait for stop or wakeup */
@@ -272,9 +282,11 @@ static void create_kthread(struct kthread_create_info *create)
 	}
 }
 
-static __printf(4, 0)
+static __printf(6, 0)
 struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
 					     void *data, int node,
+					     struct lock_class_key *exited_key,
+					     struct lock_class_key *parked_key,
 					     const char namefmt[],
 					     va_list args)
 {
@@ -289,6 +301,8 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
 	create->data = data;
 	create->node = node;
 	create->done = &done;
+	create->exited_key = exited_key;
+	create->parked_key = parked_key;
 
 	spin_lock(&kthread_create_lock);
 	list_add_tail(&create->list, &kthread_create_list);
@@ -353,21 +367,24 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
  *
  * Returns a task_struct or ERR_PTR(-ENOMEM) or ERR_PTR(-EINTR).
  */
-struct task_struct *kthread_create_on_node(int (*threadfn)(void *data),
-					   void *data, int node,
-					   const char namefmt[],
-					   ...)
+struct task_struct *_kthread_create_on_node(int (*threadfn)(void *data),
+					    void *data, int node,
+					    struct lock_class_key *exited_key,
+					    struct lock_class_key *parked_key,
+					    const char namefmt[],
+					    ...)
{
 	struct task_struct *task;
 	va_list args;
 
 	va_start(args, namefmt);
-	task = __kthread_create_on_node(threadfn, data, node, namefmt, args);
+	task = __kthread_create_on_node(threadfn, data, node,
+					exited_key, parked_key, namefmt, args);
 	va_end(args);
 
 	return task;
 }
-EXPORT_SYMBOL(kthread_create_on_node);
+EXPORT_SYMBOL(_kthread_create_on_node);
 
 static void __kthread_bind_mask(struct task_struct *p, const struct cpumask *mask, long state)
 {
@@ -421,14 +438,16 @@ EXPORT_SYMBOL(kthread_bind);
  * Description: This helper function creates and names a kernel thread
  *	       The thread will be woken and put into park mode.
  */
-struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
+struct task_struct *_kthread_create_on_cpu(int (*threadfn)(void *data),
 					  void *data, unsigned int cpu,
+					  struct lock_class_key *exited_key,
+					  struct lock_class_key *parked_key,
 					  const char *namefmt)
 {
 	struct task_struct *p;
 
-	p = kthread_create_on_node(threadfn, data, cpu_to_node(cpu), namefmt,
-				   cpu);
+	p = _kthread_create_on_node(threadfn, data, cpu_to_node(cpu),
+				    exited_key, parked_key, namefmt, cpu);
 	if (IS_ERR(p))
 		return p;
 	kthread_bind(p, cpu);
@@ -649,8 +668,10 @@ int kthread_worker_fn(void *worker_ptr)
 }
 EXPORT_SYMBOL_GPL(kthread_worker_fn);
 
-static __printf(3, 0) struct kthread_worker *
+static __printf(5, 0) struct kthread_worker *
 __kthread_create_worker(int cpu, unsigned int flags,
+			struct lock_class_key *exited_key,
+			struct lock_class_key *parked_key,
 			const char namefmt[], va_list args)
 {
 	struct kthread_worker *worker;
@@ -666,8 +687,8 @@ __kthread_create_worker(int cpu, unsigned int flags,
 	if (cpu >= 0)
 		node = cpu_to_node(cpu);
 
-	task = __kthread_create_on_node(kthread_worker_fn, worker,
-					node, namefmt, args);
+	task = __kthread_create_on_node(kthread_worker_fn, worker, node,
+					exited_key, parked_key, namefmt, args);
 	if (IS_ERR(task))
 		goto fail_task;
 
@@ -694,18 +715,22 @@ __kthread_create_worker(int cpu, unsigned int flags,
 * when the worker was SIGKILLed.
 */
 struct kthread_worker *
-kthread_create_worker(unsigned int flags, const char namefmt[], ...)
+_kthread_create_worker(unsigned int flags,
+		       struct lock_class_key *exited_key,
+		       struct lock_class_key *parked_key,
+		       const char namefmt[], ...)
 {
 	struct kthread_worker *worker;
 	va_list args;
 
 	va_start(args, namefmt);
-	worker = __kthread_create_worker(-1, flags, namefmt, args);
+	worker = __kthread_create_worker(-1, flags, exited_key, parked_key,
+					 namefmt, args);
 	va_end(args);
 
 	return worker;
 }
-EXPORT_SYMBOL(kthread_create_worker);
+EXPORT_SYMBOL(_kthread_create_worker);
 
 /**
  * kthread_create_worker_on_cpu - create a kthread worker and bind it
@@ -725,19 +750,22 @@ EXPORT_SYMBOL(kthread_create_worker);
  * when the worker was SIGKILLed.
  */
 struct kthread_worker *
-kthread_create_worker_on_cpu(int cpu, unsigned int flags,
+_kthread_create_worker_on_cpu(int cpu, unsigned int flags,
+			      struct lock_class_key *exited_key,
+			      struct lock_class_key *parked_key,
 			      const char namefmt[], ...)
 {
 	struct kthread_worker *worker;
 	va_list args;
 
 	va_start(args, namefmt);
-	worker = __kthread_create_worker(cpu, flags, namefmt, args);
+	worker = __kthread_create_worker(cpu, flags, exited_key, parked_key,
+					 namefmt, args);
 	va_end(args);
 
 	return worker;
 }
-EXPORT_SYMBOL(kthread_create_worker_on_cpu);
+EXPORT_SYMBOL(_kthread_create_worker_on_cpu);
 
 /*
  * Returns true when the work could not be queued at the moment.
-- 
2.15.0

^ permalink raw reply related	[flat|nested] 6+ messages in thread
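[Editorial aside in the archive margin: the per-callsite key trick this patch relies on can be sketched in plain userspace C. Names below are illustrative stand-ins, not the kernel API; the `({ ... })` statement expression is the same GCC extension the kernel macros use. The point is that a `static` variable declared inside a macro body is instantiated once per expansion site, which is why `kthread_create_on_node()` becomes a macro wrapping `_kthread_create_on_node()` — every caller gets its own lockdep class for the kthread completions.]

```c
#include <assert.h>
#include <stddef.h>

/* Sketch only: stand-in for the kernel's opaque lockdep key type. */
struct lock_class_key { char dummy; };

/* Like the patch's kthread_create_on_node() wrapper: the static key is
 * instantiated once per macro expansion site, so each textual call site
 * gets its own distinct key (and hence its own lockdep class). */
#define make_callsite_key()						\
({									\
	static struct lock_class_key __key;				\
	&__key;								\
})

/* Two different textual call sites of the macro. */
static struct lock_class_key *site_a(void) { return make_callsite_key(); }
static struct lock_class_key *site_b(void) { return make_callsite_key(); }

int demo(void)
{
	/* Distinct expansion sites yield distinct keys... */
	assert(site_a() != site_b());

	/* ...while re-executing one site always reuses the same static
	 * key, so all kthreads created from one caller share one class. */
	struct lock_class_key *first = site_a();
	for (int i = 0; i < 3; i++)
		assert(site_a() == first);
	return 0;
}
```

This is exactly the split the patch performs: instead of one `exited`/`parked` completion class shared by every kthread in the system, each creation call site contributes its own pair of static keys.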
* Re: [PATCH 1/2] lockdep: finer-grained completion key for kthread
  2017-11-29 15:41 ` [PATCH 1/2] lockdep: finer-grained completion key for kthread Daniel Vetter
@ 2017-12-04  8:16   ` Daniel Vetter
  0 siblings, 0 replies; 6+ messages in thread

From: Daniel Vetter @ 2017-12-04 8:16 UTC (permalink / raw)
  To: LKML, Peter Zijlstra, Ingo Molnar, Thomas Gleixner
  Cc: Intel Graphics Development, Tejun Heo, Kees Cook, Daniel Vetter,
      Tvrtko Ursulin, Marta Lofstedt, Daniel Vetter

On Wed, Nov 29, 2017 at 04:41:44PM +0100, Daniel Vetter wrote:
> [full commit message and patch quoted above; snipped]
> 
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>

Any comments on this issue here? Is there a real patch we should use
instead of this quick hack?

As mentioned in the cover letter, I don't really have much clue; this is
more an elaborate bug report than anything like a real code submission :-)

Thanks, Daniel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 6+ messages in thread
* [PATCH 2/2] lockdep: Up MAX_LOCKDEP_CHAINS
  2017-11-29 15:41 [PATCH 0/2] lockdep cross-release fallout from -rc1 Daniel Vetter
  2017-11-29 15:41 ` [PATCH 1/2] lockdep: finer-grained completion key for kthread Daniel Vetter
@ 2017-11-29 15:41 ` Daniel Vetter
  2017-11-30  7:47   ` Peter Zijlstra
  1 sibling, 1 reply; 6+ messages in thread

From: Daniel Vetter @ 2017-11-29 15:41 UTC (permalink / raw)
  To: LKML, Peter Zijlstra, Ingo Molnar, Thomas Gleixner
  Cc: Intel Graphics Development, Tejun Heo, Kees Cook, Daniel Vetter,
      Tvrtko Ursulin, Marta Lofstedt, Daniel Vetter

cross-release ftl

From Chris:

"Fwiw, this isn't cross-release but us reloading the module many times,
creating a whole host of new lockclasses. Even more fun is when the
module gets a slightly different address and the new lock address hashes
into an old lock...

"I did think about a module-hook to revoke the stale lockclasses, but
that still leaves all the hashed chains.

"This particular nuisance was temporarily pushed back by teaching igt
not to reload i915.ko on a whim."

Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Marta Lofstedt <marta.lofstedt@intel.com>
References: https://bugs.freedesktop.org/show_bug.cgi?id=103707
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
---
 kernel/locking/lockdep_internals.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index d459d624ba2a..41630a5385c6 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -69,7 +69,7 @@ enum {
 #else
 #define MAX_LOCKDEP_ENTRIES	32768UL
 
-#define MAX_LOCKDEP_CHAINS_BITS	16
+#define MAX_LOCKDEP_CHAINS_BITS	17
 
 /*
  * Stack-trace: tightly packed array of stack backtrace
-- 
2.15.0

^ permalink raw reply related	[flat|nested] 6+ messages in thread
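[Editorial aside in the archive margin: the failure mode Chris describes — stale lock chains accumulating across module reloads in a fixed-size table — can be modelled with a toy calculation. This is not kernel code, and the per-reload chain count is an invented number; the real point is that `MAX_LOCKDEP_CHAINS` is `1 << MAX_LOCKDEP_CHAINS_BITS`, and since lockdep never reclaims a dead module's chains, bumping the bits from 16 to 17 simply doubles how many reload cycles fit before the table overflows.]

```c
#include <assert.h>

/* Toy model, not kernel code: every module reload leaks its lock chains
 * into the fixed-size chain table, because lockdep never reclaims the
 * entries of an unloaded module. chains_per_load is a made-up figure. */
static unsigned long reloads_until_full(unsigned int chain_bits,
					unsigned long chains_per_load)
{
	unsigned long capacity = 1UL << chain_bits; /* MAX_LOCKDEP_CHAINS */
	unsigned long used = 0, reloads = 0;

	while (used + chains_per_load <= capacity) {
		used += chains_per_load; /* stale chains accumulate */
		reloads++;
	}
	return reloads;
}

int demo(void)
{
	/* Going from 16 to 17 bits exactly doubles the reload budget
	 * before lockdep's chain table runs dry. */
	assert(reloads_until_full(16, 1024) == 64);
	assert(reloads_until_full(17, 1024) ==
	       2 * reloads_until_full(16, 1024));
	return 0;
}
```

Which also illustrates why Peter's objection below stands: the bump only buys headroom for more reload cycles, it doesn't fix the underlying leak.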
* Re: [PATCH 2/2] lockdep: Up MAX_LOCKDEP_CHAINS
  2017-11-29 15:41 ` [PATCH 2/2] lockdep: Up MAX_LOCKDEP_CHAINS Daniel Vetter
@ 2017-11-30  7:47 ` Peter Zijlstra
  2017-11-30  8:08   ` Daniel Vetter
  0 siblings, 1 reply; 6+ messages in thread

From: Peter Zijlstra @ 2017-11-30 7:47 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: LKML, Ingo Molnar, Thomas Gleixner, Intel Graphics Development,
      Tejun Heo, Kees Cook, Tvrtko Ursulin, Marta Lofstedt, Daniel Vetter

On Wed, Nov 29, 2017 at 04:41:45PM +0100, Daniel Vetter wrote:
> cross-release ftl
> 
> From Chris:
> 
> "Fwiw, this isn't cross-release but us reloading the module many times,
> creating a whole host of new lockclasses. Even more fun is when the
> module gets a slightly different address and the new lock address hashes
> into an old lock...

Yeah, this is a known issue, just reboot.

> "I did think about a module-hook to revoke the stale lockclasses, but
> that still leaves all the hashed chains.

It's an absolute royal pain to remove all the resources consumed by a
module, and if you manage it you then have to deal with fragmented
storage -- that is, we need to keep track of which entries are used.

It's a giant heap of complexity that's just not worth it.

Given all that, I don't see why we should up this. Just don't reload
modules (or better, don't use modules at all).

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [PATCH 2/2] lockdep: Up MAX_LOCKDEP_CHAINS
  2017-11-30  7:47 ` Peter Zijlstra
@ 2017-11-30  8:08   ` Daniel Vetter
  0 siblings, 0 replies; 6+ messages in thread

From: Daniel Vetter @ 2017-11-30 8:08 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Daniel Vetter, LKML, Ingo Molnar, Thomas Gleixner,
      Intel Graphics Development, Tejun Heo, Kees Cook, Tvrtko Ursulin,
      Marta Lofstedt, Daniel Vetter

On Thu, Nov 30, 2017 at 08:47:18AM +0100, Peter Zijlstra wrote:
> On Wed, Nov 29, 2017 at 04:41:45PM +0100, Daniel Vetter wrote:
> > cross-release ftl
> > 
> > From Chris:
> > 
> > "Fwiw, this isn't cross-release but us reloading the module many times,
> > creating a whole host of new lockclasses. Even more fun is when the
> > module gets a slightly different address and the new lock address hashes
> > into an old lock...
> 
> Yeah, this is a known issue, just reboot.
> 
> > "I did think about a module-hook to revoke the stale lockclasses, but
> > that still leaves all the hashed chains.
> 
> It's an absolute royal pain to remove all the resources consumed by a
> module, and if you manage it you then have to deal with fragmented
> storage -- that is, we need to keep track of which entries are used.
> 
> It's a giant heap of complexity that's just not worth it.
> 
> Given all that, I don't see why we should up this. Just don't reload
> modules (or better, don't use modules at all).

We use excessive amounts of module reloading to validate the failure
paths of driver load, and rebooting takes too much time. I guess we
could look into just rebinding the driver without reloading the module;
that should take the pain off lockdep. Meanwhile we can carry this
locally.

I just included this to check whether you have any plans; thanks for
clarifying that fixing this isn't worth it from the core side. The real
issue we have in CI is the one the first patch papers over.

-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 6+ messages in thread
end of thread, other threads:[~2017-12-04  8:16 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-29 15:41 [PATCH 0/2] lockdep cross-release fallout from -rc1 Daniel Vetter
2017-11-29 15:41 ` [PATCH 1/2] lockdep: finer-grained completion key for kthread Daniel Vetter
2017-12-04  8:16   ` Daniel Vetter
2017-11-29 15:41 ` [PATCH 2/2] lockdep: Up MAX_LOCKDEP_CHAINS Daniel Vetter
2017-11-30  7:47   ` Peter Zijlstra
2017-11-30  8:08     ` Daniel Vetter