Date: Tue, 10 Oct 2017 13:35:04 +0800
From: Boqun Feng
To: Waiman Long
Cc: Alexander Viro, Jan Kara, Jeff Layton, "J. Bruce Fields", Tejun Heo,
	Christoph Lameter, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar, Peter Zijlstra,
	Andi Kleen, Dave Chinner, Davidlohr Bueso
Subject: Re: [PATCH v7 1/6] lib/dlock-list: Distributed and lock-protected lists
Message-ID: <20171010053504.m37pzhammhgucqyy@tardis>
References: <1507229008-20569-1-git-send-email-longman@redhat.com>
	<1507229008-20569-2-git-send-email-longman@redhat.com>
In-Reply-To: <1507229008-20569-2-git-send-email-longman@redhat.com>

On Thu, Oct 05, 2017 at 06:43:23PM +0000, Waiman Long wrote:
[...]
> +/*
> + * As all the locks in the dlock list are dynamically allocated, they need
> + * to belong to their own special lock class to avoid warning and stack
> + * trace in kernel log when lockdep is enabled. Statically allocated locks
> + * don't have this problem.
> + */
> +static struct lock_class_key dlock_list_key;
> +

So in this way, you make all dlock_lists share the same lock_class_key,
which means that if there are two structures:

	struct some_a {
		...
		struct dlock_list_heads dlists;
	};

	struct some_b {
		...
		struct dlock_list_heads dlists;
	};

then some_a::dlists and some_b::dlists are going to have the same
lockdep key. Is this what you want?

If not, you may want to do something like init_srcu_struct() does.
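For example (just a sketch of the idea, not tested; __alloc_dlock_list_heads()
here is a hypothetical variant of your function that takes the lock class
key as a parameter), a per-call-site key in the spirit of
init_srcu_struct() could look like:

	/*
	 * Sketch only: give each alloc_dlock_list_heads() call site its
	 * own lock class when lockdep is enabled, instead of one global
	 * dlock_list_key shared by every dlock list in the kernel.
	 */
	#ifdef CONFIG_DEBUG_LOCK_ALLOC
	#define alloc_dlock_list_heads(dlist)				\
	({								\
		static struct lock_class_key __key;			\
									\
		__alloc_dlock_list_heads((dlist), &__key);		\
	})
	#else
	#define alloc_dlock_list_heads(dlist)				\
		__alloc_dlock_list_heads((dlist), NULL)
	#endif

__alloc_dlock_list_heads() would then do what alloc_dlock_list_heads()
does today, except that it passes the given key to lockdep_set_class()
(and skips that when the key is NULL).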
> +/*
> + * Initialize cpu2idx mapping table
> + *
> + * It is possible that a dlock-list can be allocated before the cpu2idx is
> + * initialized. In this case, all the cpus are mapped to the first entry
> + * before initialization.
> + *
> + */
> +static int __init cpu2idx_init(void)
> +{
> +	int idx, cpu;
> +
> +	idx = 0;
> +	for_each_possible_cpu(cpu)
> +		per_cpu(cpu2idx, cpu) = idx++;
> +	return 0;
> +}
> +postcore_initcall(cpu2idx_init);
> +
> +/**
> + * alloc_dlock_list_heads - Initialize and allocate the list of head entries
> + * @dlist: Pointer to the dlock_list_heads structure to be initialized
> + * Return: 0 if successful, -ENOMEM if memory allocation error
> + *
> + * This function does not allocate the dlock_list_heads structure itself. The
> + * callers will have to do their own memory allocation, if necessary. However,
> + * this allows embedding the dlock_list_heads structure directly into other
> + * structures.
> + */
> +int alloc_dlock_list_heads(struct dlock_list_heads *dlist)
> +{
> +	int idx;
> +
> +	dlist->heads = kcalloc(nr_cpu_ids, sizeof(struct dlock_list_head),
> +			       GFP_KERNEL);
> +
> +	if (!dlist->heads)
> +		return -ENOMEM;
> +
> +	for (idx = 0; idx < nr_cpu_ids; idx++) {
> +		struct dlock_list_head *head = &dlist->heads[idx];
> +
> +		INIT_LIST_HEAD(&head->list);
> +		head->lock = __SPIN_LOCK_UNLOCKED(&head->lock);
> +		lockdep_set_class(&head->lock, &dlock_list_key);
> +	}
> +	return 0;
> +}
> +
> +/**
> + * free_dlock_list_heads - Free all the heads entries of the dlock list
> + * @dlist: Pointer of the dlock_list_heads structure to be freed
> + *
> + * This function doesn't free the dlock_list_heads structure itself. So
> + * the caller will have to do it, if necessary.
> + */
> +void free_dlock_list_heads(struct dlock_list_heads *dlist)
> +{
> +	kfree(dlist->heads);
> +	dlist->heads = NULL;
> +}
> +
> +/**
> + * dlock_lists_empty - Check if all the dlock lists are empty
> + * @dlist: Pointer to the dlock_list_heads structure
> + * Return: true if list is empty, false otherwise.
> + *
> + * This can be a pretty expensive function call. If this function is required
> + * in a performance critical path, we may have to maintain a global count
> + * of the list entries in the global dlock_list_heads structure instead.
> + */
> +bool dlock_lists_empty(struct dlock_list_heads *dlist)
> +{
> +	int idx;
> +
> +	for (idx = 0; idx < nr_cpu_ids; idx++)
> +		if (!list_empty(&dlist->heads[idx].list))
> +			return false;
> +	return true;
> +}
> +
> +/**
> + * dlock_lists_add - Adds a node to the given dlock list
> + * @node : The node to be added
> + * @dlist: The dlock list where the node is to be added
> + *
> + * List selection is based on the CPU being used when the dlock_list_add()
> + * function is called. However, deletion may be done by a different CPU.
> + */
> +void dlock_lists_add(struct dlock_list_node *node,
> +		     struct dlock_list_heads *dlist)
> +{
> +	struct dlock_list_head *head = &dlist->heads[this_cpu_read(cpu2idx)];
> +
> +	/*
> +	 * There is no need to disable preemption
> +	 */
> +	spin_lock(&head->lock);
> +	node->head = head;
> +	list_add(&node->list, &head->list);
> +	spin_unlock(&head->lock);
> +}
> +
> +/**
> + * dlock_lists_del - Delete a node from a dlock list
> + * @node : The node to be deleted
> + *
> + * We need to check the lock pointer again after taking the lock to guard
> + * against concurrent deletion of the same node. If the lock pointer changes
> + * (becomes NULL or to a different one), we assume that the deletion was done
> + * elsewhere. A warning will be printed if this happens as it is likely to be
> + * a bug.
> + */
> +void dlock_lists_del(struct dlock_list_node *node)
> +{
> +	struct dlock_list_head *head;
> +	bool retry;
> +
> +	do {
> +		head = READ_ONCE(node->head);

Since we read node->head locklessly here, I think we should use
WRITE_ONCE() for all the stores of node->head, to avoid store tearing?
(There is a sketch of what I mean after the quoted code below.)

Regards,
Boqun

> +		if (WARN_ONCE(!head, "%s: node 0x%lx has no associated head\n",
> +			      __func__, (unsigned long)node))
> +			return;
> +
> +		spin_lock(&head->lock);
> +		if (likely(head == node->head)) {
> +			list_del_init(&node->list);
> +			node->head = NULL;
> +			retry = false;
> +		} else {
> +			/*
> +			 * The lock has somehow changed. Retry again if it is
> +			 * not NULL. Otherwise, just ignore the delete
> +			 * operation.
> +			 */
> +			retry = (node->head != NULL);
> +		}
> +		spin_unlock(&head->lock);
> +	} while (retry);
> +}
> +
[...]
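By the way, to make the WRITE_ONCE() comment above concrete, here is
roughly what I mean (only a sketch against this patch, not tested):

	/*
	 * In dlock_lists_add(), pair the store to node->head with the
	 * lockless READ_ONCE() in dlock_lists_del():
	 */
	spin_lock(&head->lock);
	WRITE_ONCE(node->head, head);
	list_add(&node->list, &head->list);
	spin_unlock(&head->lock);

	/*
	 * ... and likewise in dlock_lists_del(), when the node is
	 * actually removed:
	 */
	if (likely(head == node->head)) {
		list_del_init(&node->list);
		WRITE_ONCE(node->head, NULL);
		retry = false;
	}

The load side already uses READ_ONCE(), so this only keeps the compiler
from tearing or otherwise playing tricks with the stores of node->head.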