From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752047AbdJESoX (ORCPT );
	Thu, 5 Oct 2017 14:44:23 -0400
Received: from mx1.redhat.com ([209.132.183.28]:47790 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751980AbdJESoS (ORCPT );
	Thu, 5 Oct 2017 14:44:18 -0400
DMARC-Filter: OpenDMARC Filter v1.3.2 mx1.redhat.com 612CEC047B89
Authentication-Results: ext-mx07.extmail.prod.ext.phx2.redhat.com;
	dmarc=none (p=none dis=none) header.from=redhat.com
Authentication-Results: ext-mx07.extmail.prod.ext.phx2.redhat.com;
	spf=fail smtp.mailfrom=longman@redhat.com
From: Waiman Long <longman@redhat.com>
To: Alexander Viro, Jan Kara, Jeff Layton, "J. Bruce Fields", Tejun Heo,
	Christoph Lameter
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar, Peter Zijlstra, Andi Kleen, Dave Chinner, Boqun Feng,
	Davidlohr Bueso, Waiman Long
Subject: [PATCH v7 5/6] lib/dlock-list: Enable faster lookup with hashing
Date: Thu, 5 Oct 2017 14:43:27 -0400
Message-Id: <1507229008-20569-6-git-send-email-longman@redhat.com>
In-Reply-To: <1507229008-20569-1-git-send-email-longman@redhat.com>
References: <1507229008-20569-1-git-send-email-longman@redhat.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
	(mx1.redhat.com [10.5.110.31]); Thu, 05 Oct 2017 18:44:17 +0000 (UTC)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Insertion and deletion are relatively cheap and mostly contention-free
for a dlock-list. Lookup, on the other hand, can be rather costly
because all the lists in a dlock-list have to be iterated.

Currently, dlock-list insertion is based on the CPU that the task is
running on, so a given object can be inserted into any one of the
lists depending on which CPU it happens to be running on.

This patch provides an alternative way of list selection. The caller
can provide an object context which will be hashed to one of the lists
in the dlock-list. The object can then be added to that particular
list, and lookup can be done by iterating only that one list instead
of all the lists in the dlock-list.

The new APIs are:

 struct dlock_list_head *dlock_list_hash(struct dlock_list_heads *, void *);
 void dlock_list_add(struct dlock_list_node *, struct dlock_list_head *);

Signed-off-by: Waiman Long <longman@redhat.com>
---
 include/linux/dlock-list.h |  9 +++++++++
 lib/dlock-list.c           | 49 +++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 51 insertions(+), 7 deletions(-)

diff --git a/include/linux/dlock-list.h b/include/linux/dlock-list.h
index 7940e524..16474ae 100644
--- a/include/linux/dlock-list.h
+++ b/include/linux/dlock-list.h
@@ -121,6 +121,15 @@ extern void dlock_lists_add(struct dlock_list_node *node,
 extern void dlock_lists_del(struct dlock_list_node *node);
 
 /*
+ * Instead of individual list mapping by CPU number, it can be based on
+ * a given context to speed up lookup performance.
+ */
+extern struct dlock_list_head *dlock_list_hash(struct dlock_list_heads *dlist,
+					       void *context);
+extern void dlock_list_add(struct dlock_list_node *node,
+			   struct dlock_list_head *head);
+
+/*
  * Find the first entry of the next available list.
  */
 extern struct dlock_list_node *
diff --git a/lib/dlock-list.c b/lib/dlock-list.c
index a045fd7..8cd0876 100644
--- a/lib/dlock-list.c
+++ b/lib/dlock-list.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include <linux/jhash.h>
 
 /*
  * The distributed and locked list is a distributed set of lists each of
@@ -163,6 +164,46 @@ bool dlock_lists_empty(struct dlock_list_heads *dlist)
 }
 
 /**
+ * dlock_list_hash - Hash the given context to a particular list
+ * @dlist: The dlock list
+ * @ctx  : The context for hashing
+ */
+struct dlock_list_head *dlock_list_hash(struct dlock_list_heads *dlist,
+					void *ctx)
+{
+	unsigned long val = (unsigned long)ctx;
+	u32 hash;
+
+	if (unlikely(!nr_dlock_lists)) {
+		WARN_ON_ONCE(1);
+		return &dlist->heads[0];
+	}
+	if (val < nr_dlock_lists)
+		hash = val;
+	else
+		hash = jhash2((u32 *)&ctx, sizeof(ctx)/sizeof(u32), 0)
+				% nr_dlock_lists;
+	return &dlist->heads[hash];
+}
+
+/**
+ * dlock_list_add - Add a node to a particular head of dlock list
+ * @node: The node to be added
+ * @head: The dlock list head where the node is to be added
+ */
+void dlock_list_add(struct dlock_list_node *node,
+		    struct dlock_list_head *head)
+{
+	/*
+	 * There is no need to disable preemption
+	 */
+	spin_lock(&head->lock);
+	node->head = head;
+	list_add(&node->list, &head->list);
+	spin_unlock(&head->lock);
+}
+
+/**
  * dlock_lists_add - Adds a node to the given dlock list
  * @node : The node to be added
  * @dlist: The dlock list where the node is to be added
@@ -175,13 +216,7 @@ void dlock_lists_add(struct dlock_list_node *node,
 {
 	struct dlock_list_head *head = &dlist->heads[this_cpu_read(cpu2idx)];
 
-	/*
-	 * There is no need to disable preemption
-	 */
-	spin_lock(&head->lock);
-	node->head = head;
-	list_add(&node->list, &head->list);
-	spin_unlock(&head->lock);
+	dlock_list_add(node, head);
 }
 
 /**
-- 
1.8.3.1
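
P.S. A minimal usage sketch of how a caller might pair the new APIs, added
for illustration only (not part of the patch). The object type struct foo,
its key field, and the foo_insert()/foo_lookup() helpers are hypothetical;
only dlock_list_hash(), dlock_list_add(), and the head->lock/head->list
fields come from the patch itself:

	#include <linux/dlock-list.h>
	#include <linux/list.h>
	#include <linux/spinlock.h>

	/* Hypothetical object keyed by an opaque pointer. */
	struct foo {
		void *key;			/* context used for hashing */
		struct dlock_list_node node;
	};

	static struct dlock_list_heads foo_lists;	/* assumed already allocated */

	/* Insert @f into the one sub-list its key hashes to. */
	static void foo_insert(struct foo *f)
	{
		struct dlock_list_head *head = dlock_list_hash(&foo_lists, f->key);

		dlock_list_add(&f->node, head);
	}

	/*
	 * Look up @key by scanning only the sub-list it hashes to,
	 * taking that sub-list's spinlock directly.
	 */
	static struct foo *foo_lookup(void *key)
	{
		struct dlock_list_head *head = dlock_list_hash(&foo_lists, key);
		struct dlock_list_node *node;
		struct foo *found = NULL;

		spin_lock(&head->lock);
		list_for_each_entry(node, &head->list, list) {
			struct foo *cur = container_of(node, struct foo, node);

			if (cur->key == key) {
				found = cur;
				break;
			}
		}
		spin_unlock(&head->lock);
		return found;
	}

Because dlock_list_hash() is deterministic for a given context, insertion
and lookup always agree on the same sub-list: context values smaller than
nr_dlock_lists map directly to an index, and everything else is reduced
modulo nr_dlock_lists via jhash2().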