From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mx1.redhat.com ([209.132.183.28]:41240 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S966040AbdJQThJ
	(ORCPT ); Tue, 17 Oct 2017 15:37:09 -0400
From: Waiman Long 
To: Alexander Viro , Jan Kara , Jeff Layton ,
	"J. Bruce Fields" , Tejun Heo , Christoph Lameter 
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar , Peter Zijlstra , Andi Kleen , Dave Chinner ,
	Boqun Feng , Davidlohr Bueso , Waiman Long 
Subject: [PATCH v7 9/9] lib/dlock-list: Unique lock class key for each
	allocation call site
Date: Tue, 17 Oct 2017 15:36:36 -0400
Message-Id: <1508268996-8959-2-git-send-email-longman@redhat.com>
In-Reply-To: <1508268996-8959-1-git-send-email-longman@redhat.com>
References: <1507229008-20569-1-git-send-email-longman@redhat.com>
	<1508268996-8959-1-git-send-email-longman@redhat.com>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID: 

Boqun Feng has kindly pointed out that the same lock class key is used
for all dlock-list allocations. That can be a problem when a task needs
to acquire the locks of more than one dlock-list at the same time with
lockdep enabled. To avoid this problem, the alloc_dlock_list_heads()
function is changed to use a different lock class key for each of its
call sites in the kernel.
Reported-by: Boqun Feng 
Signed-off-by: Waiman Long 
---
 include/linux/dlock-list.h | 16 +++++++++++++++-
 lib/dlock-list.c           | 21 +++++++++------------
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/include/linux/dlock-list.h b/include/linux/dlock-list.h
index 2ba7b4f..02c5f4d 100644
--- a/include/linux/dlock-list.h
+++ b/include/linux/dlock-list.h
@@ -116,9 +116,23 @@ static inline void dlock_list_relock(struct dlock_list_iter *iter)
 /*
  * Allocation and freeing of dlock list
  */
-extern int alloc_dlock_list_heads(struct dlock_list_heads *dlist, int irqsafe);
+extern int __alloc_dlock_list_heads(struct dlock_list_heads *dlist,
+				    int irqsafe, struct lock_class_key *key);
 extern void free_dlock_list_heads(struct dlock_list_heads *dlist);
 
+/**
+ * alloc_dlock_list_heads - Initialize and allocate the list of head entries.
+ * @dlist  : Pointer to the dlock_list_heads structure to be initialized
+ * @irqsafe: IRQ safe mode flag
+ * Return  : 0 if successful, -ENOMEM if memory allocation error
+ */
+#define alloc_dlock_list_heads(dlist, irqsafe)				\
+({									\
+	static struct lock_class_key _key;				\
+	int _ret = __alloc_dlock_list_heads(dlist, irqsafe, &_key);	\
+	_ret;								\
+})
+
 /*
  * Check if a dlock list is empty or not.
  */
diff --git a/lib/dlock-list.c b/lib/dlock-list.c
index 6ce5c7193..17e182b 100644
--- a/lib/dlock-list.c
+++ b/lib/dlock-list.c
@@ -36,14 +36,6 @@ static int nr_dlock_lists __read_mostly;
 
 /*
- * As all the locks in the dlock list are dynamically allocated, they need
- * to belong to their own special lock class to avoid warning and stack
- * trace in kernel log when lockdep is enabled. Statically allocated locks
- * don't have this problem.
- */
-static struct lock_class_key dlock_list_key;
-
-/*
  * Initialize cpu2idx mapping table & nr_dlock_lists.
  *
  * It is possible that a dlock-list can be allocated before the cpu2idx is
@@ -98,9 +90,10 @@ static int __init cpu2idx_init(void)
 postcore_initcall(cpu2idx_init);
 
 /**
- * alloc_dlock_list_heads - Initialize and allocate the list of head entries
+ * __alloc_dlock_list_heads - Initialize and allocate the list of head entries
  * @dlist  : Pointer to the dlock_list_heads structure to be initialized
  * @irqsafe: IRQ safe mode flag
+ * @key    : The lock class key to be used for lockdep
  * Return: 0 if successful, -ENOMEM if memory allocation error
  *
  * This function does not allocate the dlock_list_heads structure itself. The
@@ -112,8 +105,12 @@ static int __init cpu2idx_init(void)
  * than necessary allocated is not a problem other than some wasted memory.
  * The extra lists will not be ever used as all the cpu2idx entries will be
  * 0 before initialization.
+ *
+ * Dynamically allocated locks need to have their own special lock class
+ * to avoid lockdep warning.
  */
-int alloc_dlock_list_heads(struct dlock_list_heads *dlist, int irqsafe)
+int __alloc_dlock_list_heads(struct dlock_list_heads *dlist, int irqsafe,
+			     struct lock_class_key *key)
 {
 	int idx, cnt = nr_dlock_lists ? nr_dlock_lists : nr_cpu_ids;
 
@@ -128,11 +125,11 @@ int alloc_dlock_list_heads(struct dlock_list_heads *dlist, int irqsafe)
 		INIT_LIST_HEAD(&head->list);
 		head->lock = __SPIN_LOCK_UNLOCKED(&head->lock);
 		head->irqsafe = irqsafe;
-		lockdep_set_class(&head->lock, &dlock_list_key);
+		lockdep_set_class(&head->lock, key);
 	}
 	return 0;
 }
-EXPORT_SYMBOL(alloc_dlock_list_heads);
+EXPORT_SYMBOL(__alloc_dlock_list_heads);
 
 /**
  * free_dlock_list_heads - Free all the heads entries of the dlock list
-- 
1.8.3.1