From: "Jan Beulich"
Subject: Re: [PATCHv2 1/3] rwlock: Add per-cpu reader-writer locks
Date: Thu, 26 Nov 2015 05:17:51 -0700
Message-ID: <5657067F02000078000B9561@prv-mh.provo.novell.com>
References: <1448035423-24242-1-git-send-email-malcolm.crossley@citrix.com>
 <1448035423-24242-2-git-send-email-malcolm.crossley@citrix.com>
In-Reply-To: <1448035423-24242-2-git-send-email-malcolm.crossley@citrix.com>
To: malcolm.crossley@citrix.com
Cc: keir@xen.org, ian.campbell@citrix.com, george.dunlap@eu.citrix.com,
 andrew.cooper3@citrix.com, Marcos.Matsunaga@oracle.com,
 stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org
List-Id: xen-devel@lists.xenproject.org

>>> On 20.11.15 at 17:03, <malcolm.crossley@citrix.com> wrote:
> --- a/xen/common/spinlock.c
> +++ b/xen/common/spinlock.c
> @@ -10,6 +10,8 @@
>  #include
>  #include
>
> +DEFINE_PER_CPU(cpumask_t, percpu_rwlock_readers);

static (and perhaps even local to _percpu_write_lock()).

> @@ -492,6 +494,42 @@ int _rw_is_write_locked(rwlock_t *lock)
>      return (lock->lock == RW_WRITE_FLAG); /* writer in critical section? */
>  }
>
> +void _percpu_write_lock(percpu_rwlock_t **per_cpudata,
> +                percpu_rwlock_t *percpu_rwlock)
> +{
> +    unsigned int cpu;
> +
> +    /* First take the write lock to protect against other writers. */

... or slow path readers.

> +    write_lock(&percpu_rwlock->rwlock);
> +
> +    /* Now set the global variable so that readers start using read_lock. */
> +    percpu_rwlock->writer_activating = 1;
> +    smp_mb();
> +
> +    /* Using a per cpu cpumask is only safe if there is no nesting. */
> +    ASSERT(!in_irq());
> +    this_cpu(percpu_rwlock_readers) = cpu_online_map;

cpumask_copy() please, to avoid copying perhaps half a kb on a pretty
small system.

> +    /* Check if there are any percpu readers in progress on this rwlock. */
> +    for ( ; ; )
> +    {
> +        for_each_cpu(cpu, &this_cpu(percpu_rwlock_readers))

No more than one this_cpu{,_ptr}() please within one function.

> +        {
> +            /*
> +             * Remove any percpu readers not contending on this rwlock
> +             * from our check mask.
> +             */

Hard tabs.

> +            if ( per_cpu_ptr(per_cpudata, cpu) != percpu_rwlock )
> +                cpumask_clear_cpu(cpu, &this_cpu(percpu_rwlock_readers));

__cpumask_clear_cpu()

> +static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,

const?
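Purely for illustration, folding the _percpu_write_lock() comments above
together would give roughly the following. This is a sketch against the
hunk as quoted, not the patch itself: the rwlock_readers local is a name
chosen here, and the empty-mask / cpu_relax() loop exit is an assumption,
since that part of the hunk is not quoted above.

static DEFINE_PER_CPU(cpumask_t, percpu_rwlock_readers);

void _percpu_write_lock(percpu_rwlock_t **per_cpudata,
                        percpu_rwlock_t *percpu_rwlock)
{
    unsigned int cpu;
    /* Evaluate this_cpu() only once. */
    cpumask_t *rwlock_readers = &this_cpu(percpu_rwlock_readers);

    /*
     * First take the write lock to protect against other writers or
     * slow path readers.
     */
    write_lock(&percpu_rwlock->rwlock);

    /* Now set the global variable so that readers start using read_lock. */
    percpu_rwlock->writer_activating = 1;
    smp_mb();

    /* Using a per cpu cpumask is only safe if there is no nesting. */
    ASSERT(!in_irq());
    /* Copy only the bits in use instead of the whole structure by value. */
    cpumask_copy(rwlock_readers, &cpu_online_map);

    /* Check if there are any percpu readers in progress on this rwlock. */
    for ( ; ; )
    {
        for_each_cpu(cpu, rwlock_readers)
        {
            /*
             * Remove any percpu readers not contending on this rwlock
             * from our check mask.
             */
            if ( per_cpu_ptr(per_cpudata, cpu) != percpu_rwlock )
                /* Non-atomic variant - the mask is local to this CPU. */
                __cpumask_clear_cpu(cpu, rwlock_readers);
        }
        /* Assumed loop exit - this part of the hunk is not quoted above. */
        if ( cpumask_empty(rwlock_readers) )
            break;
        cpu_relax();
    }
}

The non-atomic __cpumask_clear_cpu() is sufficient because the mask is
only ever touched by the local CPU, and cpumask_copy() copies just the
bits actually in use rather than the full cpumask_t.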
> +#define percpu_read_lock(percpu, lock) ( _percpu_read_lock( \
> +                     &get_per_cpu_var(percpu), lock ) )
> +#define percpu_read_unlock(percpu, lock) ( _percpu_read_unlock( \
> +                     &get_per_cpu_var(percpu), lock ) )
> +#define percpu_write_lock(percpu, lock) ( _percpu_write_lock( \
> +                     &get_per_cpu_var(percpu), lock ) )
> +#define percpu_write_unlock(percpu, lock) ( _percpu_write_unlock( lock ) )

Perhaps easier to read as

#define percpu_read_lock(percpu, lock) \
    _percpu_read_lock(&get_per_cpu_var(percpu), lock)
#define percpu_read_unlock(percpu, lock) \
    _percpu_read_unlock(&get_per_cpu_var(percpu), lock)
#define percpu_write_lock(percpu, lock) \
    _percpu_write_lock(&get_per_cpu_var(percpu), lock)
#define percpu_write_unlock(percpu, lock) _percpu_write_unlock(lock)

But at the very least the stray blanks and parentheses should be dropped.

Jan