From: "Byungchul Park" <byungchul.park@lge.com>
To: "'Boqun Feng'" <boqun.feng@gmail.com>
Cc: peterz@infradead.org, mingo@kernel.org, tglx@linutronix.de,
    walken@google.com, kirill@shutemov.name, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, akpm@linux-foundation.org, willy@infradead.org,
    npiggin@gmail.com, kernel-team@lge.com
References: <1502089981-21272-1-git-send-email-byungchul.park@lge.com>
    <1502089981-21272-7-git-send-email-byungchul.park@lge.com>
    <20170810115922.kegrfeg6xz7mgpj4@tardis>
In-Reply-To: <20170810115922.kegrfeg6xz7mgpj4@tardis>
Subject: RE: [PATCH v8 06/14] lockdep: Detect and handle hist_lock ring buffer overwrite
Date: Thu, 10 Aug 2017 21:11:32 +0900
Message-ID: <016b01d311d1$d02acfa0$70806ee0$@lge.com>

> -----Original Message-----
> From: Boqun Feng [mailto:boqun.feng@gmail.com]
> Sent: Thursday, August 10, 2017 8:59 PM
> To: Byungchul Park
> Cc: peterz@infradead.org; mingo@kernel.org; tglx@linutronix.de;
> walken@google.com; kirill@shutemov.name; linux-kernel@vger.kernel.org;
> linux-mm@kvack.org; akpm@linux-foundation.org; willy@infradead.org;
> npiggin@gmail.com; kernel-team@lge.com
> Subject: Re: [PATCH v8 06/14] lockdep: Detect and handle hist_lock ring buffer overwrite
> 
> On Mon, Aug 07, 2017 at 04:12:53PM +0900, Byungchul Park wrote:
> > The ring buffer can be overwritten by hardirq/softirq/work contexts.
> > Those cases must be considered on rollback or commit. For example,
> >
> >            |<------ hist_lock ring buffer size ----->|
> >            ppppppppppppiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii
> >  wrapped > iiiiiiiiiiiiiiiiiiiiiii....................
> >
> > where 'p' represents an acquisition in process context,
> > 'i' represents an acquisition in irq context.
> >
> > On irq exit, crossrelease tries to roll back idx to the original
> > position, but it should not, because the entry has already been
> > invalidated by the overwriting 'i'. Avoid rollback or commit for
> > overwritten entries.
> >
> > Signed-off-by: Byungchul Park <byungchul.park@lge.com>
> > ---
> >  include/linux/lockdep.h  | 20 +++++++++++++++++++
> >  include/linux/sched.h    |  3 +++
> >  kernel/locking/lockdep.c | 52 +++++++++++++++++++++++++++++++++++++++++++-----
> >  3 files changed, 70 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> > index 0c8a1b8..48c244c 100644
> > --- a/include/linux/lockdep.h
> > +++ b/include/linux/lockdep.h
> > @@ -284,6 +284,26 @@ struct held_lock {
> >   */
> >  struct hist_lock {
> >          /*
> > +         * Id for each entry in the ring buffer. This is used to
> > +         * decide whether the ring buffer was overwritten or not.
> > +         *
> > +         * For example,
> > +         *
> > +         *           |<----------- hist_lock ring buffer size ------->|
> > +         *           pppppppppppppppppppppiiiiiiiiiiiiiiiiiiiiiiiiiiiii
> > +         * wrapped > iiiiiiiiiiiiiiiiiiiiiiiiiii.......................
> > +         *
> > +         *           where 'p' represents an acquisition in process
> > +         *           context, 'i' represents an acquisition in irq
> > +         *           context.
> > +         *
> > +         * In this example, the ring buffer was overwritten by
> > +         * acquisitions in irq context, that should be detected on
> > +         * rollback or commit.
> > +         */
> > +        unsigned int hist_id;
> > +
> > +        /*
> >           * Seperate stack_trace data. This will be used at commit step.
> >           */
> >          struct stack_trace trace;
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 5becef5..373466b 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -855,6 +855,9 @@ struct task_struct {
> >          unsigned int xhlock_idx;
> >          /* For restoring at history boundaries */
> >          unsigned int xhlock_idx_hist[CONTEXT_NR];
> > +        unsigned int hist_id;
> > +        /* For overwrite check at each context exit */
> > +        unsigned int hist_id_save[CONTEXT_NR];
> >  #endif
> > 
> >  #ifdef CONFIG_UBSAN
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index afd6e64..5168dac 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
> > @@ -4742,6 +4742,17 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
> >  static atomic_t cross_gen_id; /* Can be wrapped */
> > 
> >  /*
> > + * Make an entry of the ring buffer invalid.
> > + */
> > +static inline void invalidate_xhlock(struct hist_lock *xhlock)
> > +{
> > +        /*
> > +         * Normally, xhlock->hlock.instance must be !NULL.
> > +         */
> > +        xhlock->hlock.instance = NULL;
> > +}
> > +
> > +/*
> >   * Lock history stacks; we have 3 nested lock history stacks:
> >   *
> >   * Hard IRQ
> > @@ -4773,14 +4784,28 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
> >   */
> >  void crossrelease_hist_start(enum context_t c)
> >  {
> > -        if (current->xhlocks)
> > -                current->xhlock_idx_hist[c] = current->xhlock_idx;
> > +        struct task_struct *cur = current;
> > +
> > +        if (cur->xhlocks) {
> > +                cur->xhlock_idx_hist[c] = cur->xhlock_idx;
> > +                cur->hist_id_save[c] = cur->hist_id;
> > +        }
> >  }
> > 
> >  void crossrelease_hist_end(enum context_t c)
> >  {
> > -        if (current->xhlocks)
> > -                current->xhlock_idx = current->xhlock_idx_hist[c];
> > +        struct task_struct *cur = current;
> > +
> > +        if (cur->xhlocks) {
> > +                unsigned int idx = cur->xhlock_idx_hist[c];
> > +                struct hist_lock *h = &xhlock(idx);
> > +
> > +                cur->xhlock_idx = idx;
> > +
> > +                /* Check if the ring was overwritten. */
> > +                if (h->hist_id != cur->hist_id_save[c])
> 
> Could we use:
> 
> 	if (h->hist_id != idx)

No, we cannot.

hist_id is a kind of timestamp, used to detect whether the data at the
same index of the ring buffer has been overwritten, while idx is just an
index. :)

IOW, they mean different things.

> 
> here, and
> 
> > +                        invalidate_xhlock(h);
> > +        }
> >  }
> > 
> >  static int cross_lock(struct lockdep_map *lock)
> > @@ -4826,6 +4851,7 @@ static inline int depend_after(struct held_lock *hlock)
> >   * Check if the xhlock is valid, which would be false if,
> >   *
> >   * 1. Has not used after initializaion yet.
> > + * 2. Got invalidated.
> >   *
> >   * Remind hist_lock is implemented as a ring buffer.
> >   */
> > @@ -4857,6 +4883,7 @@ static void add_xhlock(struct held_lock *hlock)
> > 
> >          /* Initialize hist_lock's members */
> >          xhlock->hlock = *hlock;
> > +        xhlock->hist_id = current->hist_id++;
> 
> use:
> 
> 	xhlock->hist_id = idx;
> 
> and,

Same.
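To make the timestamp-vs-index point concrete, here is a minimal
user-space sketch. It is not the kernel code; RING_SIZE, struct entry,
ring_add() and so on are made-up names standing in for the hist_lock
machinery. The idea it shows: a per-write counter stored in each slot
reveals that the slot was overwritten, even though the slot's position
in the ring is, by construction, the same after a wrap.

#include <stdio.h>

#define RING_SIZE 8                     /* made-up; stands in for the real ring size */

struct entry {
        unsigned int hist_id;           /* timestamp: increases on every write */
        int data;
};

static struct entry ring[RING_SIZE];
static unsigned int next_idx;           /* ring position, used modulo RING_SIZE */
static unsigned int next_hist_id;       /* never rolled back */

static unsigned int ring_add(int data)
{
        unsigned int idx = next_idx++ % RING_SIZE;

        ring[idx].data = data;
        ring[idx].hist_id = next_hist_id++;
        return idx;
}

int main(void)
{
        /* "process context" adds an entry and remembers where it is */
        unsigned int saved_idx = ring_add(100);
        unsigned int saved_hist_id = ring[saved_idx].hist_id;
        int i;

        /* an "interrupt" then adds RING_SIZE more entries, wrapping the ring */
        for (i = 0; i < RING_SIZE; i++)
                ring_add(i);

        /*
         * After rolling back to saved_idx, the index alone looks fine:
         * it still names the same slot.  Only the timestamp shows that
         * the slot now holds a different, newer entry.
         */
        if (ring[saved_idx].hist_id != saved_hist_id)
                printf("slot %u was overwritten, invalidate it\n", saved_idx);

        return 0;
}

In this sketch the slot's position repeats every RING_SIZE writes, so a
position-based tag could not tell the old entry from the new one, while
the monotonically increasing hist_id can.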
> 
> > 
> >          xhlock->trace.nr_entries = 0;
> >          xhlock->trace.max_entries = MAX_XHLOCK_TRACE_ENTRIES;
> > @@ -4995,6 +5022,7 @@ static int commit_xhlock(struct cross_lock *xlock, struct hist_lock *xhlock)
> >  static void commit_xhlocks(struct cross_lock *xlock)
> >  {
> >          unsigned int cur = current->xhlock_idx;
> > +        unsigned int prev_hist_id = xhlock(cur).hist_id;
> 
> use:
> 
> 	unsigned int prev_hist_id = cur;
> 
> here.

Same.
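For completeness, a similar user-space sketch of how a prev_hist_id seed
can be used when walking the ring backwards. Again, the names
(walk_history, ring_add, RING_SIZE, ...) are made up and the code is only
an approximation of the intent as I read it, not the patch itself: as
long as the timestamps keep decreasing, the walk is still inside live
history; the first slot whose timestamp is larger than the one just
visited must have been overwritten by a newer context, so the walk stops
there.

#include <stdio.h>

#define RING_SIZE 8                     /* power of two, so (cur - i) % RING_SIZE wraps correctly */

struct entry {
        unsigned int hist_id;           /* strictly increasing per write */
        int valid;
};

static struct entry ring[RING_SIZE];

/*
 * Walk backwards from slot 'cur', seeded with that slot's own timestamp.
 * Older positions must carry older (smaller) timestamps; the first one
 * that does not has been overwritten, so stop there.
 */
static void walk_history(unsigned int cur)
{
        unsigned int prev_hist_id = ring[cur % RING_SIZE].hist_id;
        unsigned int i;

        for (i = 0; i < RING_SIZE; i++) {
                struct entry *e = &ring[(cur - i) % RING_SIZE];

                if (!e->valid)
                        break;
                if (e->hist_id > prev_hist_id)
                        break;

                prev_hist_id = e->hist_id;
                printf("commit entry with hist_id=%u\n", e->hist_id);
        }
}

static void ring_add(unsigned int *idx, unsigned int *hist_id)
{
        ring[*idx % RING_SIZE].hist_id = (*hist_id)++;
        ring[*idx % RING_SIZE].valid = 1;
        (*idx)++;
}

int main(void)
{
        unsigned int idx = 0, hist_id = 0, saved_idx;
        int i;

        /* "process context" adds three entries */
        for (i = 0; i < 3; i++)
                ring_add(&idx, &hist_id);
        saved_idx = idx - 1;            /* newest process-context slot */

        /* an "interrupt" adds seven more, overwriting slots 0 and 1 */
        for (i = 0; i < 7; i++)
                ring_add(&idx, &hist_id);

        /* walk from the rolled-back position: prints hist_id=2, then stops */
        walk_history(saved_idx);
        return 0;
}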