Subject: Re: [dm-devel] [PATCH 2/3] Introduce percpu rw semaphores
From: Eric Dumazet
To: Mikulas Patocka
Cc: Jens Axboe, Andrea Arcangeli, Jan Kara, dm-devel@redhat.com,
	linux-kernel@vger.kernel.org, Jeff Moyer, Alexander Viro,
	kosaki.motohiro@jp.fujitsu.com, linux-fsdevel@vger.kernel.org,
	lwoodman@redhat.com, "Alasdair G. Kergon"
Date: Sun, 29 Jul 2012 20:36:02 +0200
Message-ID: <1343586962.2626.13266.camel@edumazet-glaptop>
In-Reply-To: <1343556630.2626.13257.camel@edumazet-glaptop>

On Sun, 2012-07-29 at 12:10 +0200, Eric Dumazet wrote:
> You can probably design something needing no more than 4 bytes per cpu,
> and this thing could use non locked operations as bonus.
>
> like the following ...

Coming back from my bike ride, here is a more polished version with
proper synchronization/barriers.

The idea: as long as active_counters is non-NULL, readers bump a
per-cpu counter under rcu_read_lock(). A writer clears the pointer,
waits one RCU grace period so no new fast-path reader can appear, then
waits for the per-cpu counts to drain to zero.

struct percpu_rw_semaphore {
	/* percpu_sem_down_read() uses the following in fast path */
	unsigned int __percpu	*active_counters;

	unsigned int __percpu	*counters;
	struct rw_semaphore	sem;	/* used in slow path and by writers */
};

static inline int percpu_sem_init(struct percpu_rw_semaphore *p)
{
	p->counters = alloc_percpu(unsigned int);
	if (!p->counters)
		return -ENOMEM;
	init_rwsem(&p->sem);
	rcu_assign_pointer(p->active_counters, p->counters);
	return 0;
}

static inline bool percpu_sem_down_read(struct percpu_rw_semaphore *p)
{
	unsigned int __percpu *counters;

	rcu_read_lock();
	counters = rcu_dereference(p->active_counters);
	if (counters) {
		this_cpu_inc(*counters);
		smp_wmb(); /* paired with smp_rmb() in percpu_sem_down_write() */
		rcu_read_unlock();
		return true;
	}
	rcu_read_unlock();
	down_read(&p->sem);
	return false;
}

static inline void percpu_sem_up_read(struct percpu_rw_semaphore *p, bool fastpath)
{
	if (fastpath)
		this_cpu_dec(*p->counters);
	else
		up_read(&p->sem);
}

static inline unsigned int percpu_count(unsigned int __percpu *counters)
{
	unsigned int total = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		total += *per_cpu_ptr(counters, cpu);

	return total;
}

static inline void percpu_sem_down_write(struct percpu_rw_semaphore *p)
{
	down_write(&p->sem);
	p->active_counters = NULL;	/* new readers take the slow path */
	synchronize_rcu();	/* wait out readers still in rcu_read_lock() */
	smp_rmb(); /* paired with smp_wmb() in percpu_sem_down_read() */

	while (percpu_count(p->counters))	/* drain in-flight fast-path readers */
		schedule();
}

static inline void percpu_sem_up_write(struct percpu_rw_semaphore *p)
{
	rcu_assign_pointer(p->active_counters, p->counters);
	up_write(&p->sem);
}
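
For illustration, a minimal usage sketch under the API above. The
reader()/writer() functions, my_sem, and do_read_work()/do_write_work()
are made-up names, not part of the patch; the key point is that the
bool returned by percpu_sem_down_read() records which path the reader
took and must be handed back to percpu_sem_up_read() so the release
matches the acquire.

/* Hypothetical caller, for illustration only. */
static struct percpu_rw_semaphore my_sem;

static void reader(void)
{
	bool fastpath = percpu_sem_down_read(&my_sem);

	do_read_work();		/* read-side critical section */

	percpu_sem_up_read(&my_sem, fastpath);
}

static void writer(void)
{
	percpu_sem_down_write(&my_sem);

	do_write_work();	/* exclusive against all readers */

	percpu_sem_up_write(&my_sem);
}

Threading the path through the return value keeps the structure at the
4 bytes per cpu mentioned above: nothing per-task has to remember
whether the reader landed on the per-cpu counter or the rwsem.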