From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Wed, 31 May 2017 11:43:21 +0200
Message-Id: <20170531094330.1808-3-pbonzini@redhat.com>
In-Reply-To: <20170531094330.1808-1-pbonzini@redhat.com>
References: <20170531094330.1808-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 02/11] coroutine-lock: add qemu_co_rwlock_downgrade and qemu_co_rwlock_upgrade
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, qemu-block@nongnu.org, mreitz@redhat.com

These functions are more efficient in the presence of contention.
qemu_co_rwlock_downgrade also guarantees not to block, which may
be useful in some algorithms too.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/qemu/coroutine.h   | 19 +++++++++++++++++++
 util/qemu-coroutine-lock.c | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
index a4509bd977..0ca96fe3d3 100644
--- a/include/qemu/coroutine.h
+++ b/include/qemu/coroutine.h
@@ -229,6 +229,24 @@ void qemu_co_rwlock_init(CoRwlock *lock);
 void qemu_co_rwlock_rdlock(CoRwlock *lock);
 
 /**
+ * Write Locks the CoRwlock from a reader. This is a bit more efficient than
+ * @qemu_co_rwlock_unlock followed by a separate @qemu_co_rwlock_wrlock.
+ * However, if the lock cannot be upgraded immediately, control is transferred
+ * to the caller of the current coroutine. Also, @qemu_co_rwlock_upgrade
+ * only overrides CoRwlock's fairness if there are no concurrent readers;
+ * another writer might run while @qemu_co_rwlock_upgrade blocks.
+ */
+void qemu_co_rwlock_upgrade(CoRwlock *lock);
+
+/**
+ * Downgrades a write-side critical section to a reader. Downgrading with
+ * @qemu_co_rwlock_downgrade never blocks, unlike @qemu_co_rwlock_unlock
+ * followed by @qemu_co_rwlock_rdlock. This makes it more efficient, and
+ * may also sometimes be necessary for correctness.
+ */
+void qemu_co_rwlock_downgrade(CoRwlock *lock);
+
+/**
  * Write Locks the mutex. If the lock cannot be taken immediately because
  * of a parallel reader, control is transferred to the caller of the current
  * coroutine.
diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
index 6328eed26b..bcdcb91ee1 100644
--- a/util/qemu-coroutine-lock.c
+++ b/util/qemu-coroutine-lock.c
@@ -387,6 +387,21 @@ void qemu_co_rwlock_unlock(CoRwlock *lock)
     qemu_co_mutex_unlock(&lock->mutex);
 }
 
+void qemu_co_rwlock_downgrade(CoRwlock *lock)
+{
+    Coroutine *self = qemu_coroutine_self();
+
+    /* lock->mutex critical section started in qemu_co_rwlock_wrlock or
+     * qemu_co_rwlock_upgrade.
+     */
+    assert(lock->reader == 0);
+    lock->reader++;
+    qemu_co_mutex_unlock(&lock->mutex);
+
+    /* The rest of the read-side critical section is run without the mutex. */
+    self->locks_held++;
+}
+
 void qemu_co_rwlock_wrlock(CoRwlock *lock)
 {
     qemu_co_mutex_lock(&lock->mutex);
@@ -401,3 +416,23 @@ void qemu_co_rwlock_wrlock(CoRwlock *lock)
      * There is no need to update self->locks_held.
      */
 }
+
+void qemu_co_rwlock_upgrade(CoRwlock *lock)
+{
+    Coroutine *self = qemu_coroutine_self();
+
+    qemu_co_mutex_lock(&lock->mutex);
+    assert(lock->reader > 0);
+    lock->reader--;
+    lock->pending_writer++;
+    while (lock->reader) {
+        qemu_co_queue_wait(&lock->queue, &lock->mutex);
+    }
+    lock->pending_writer--;
+
+    /* The rest of the write-side critical section is run with
+     * the mutex taken, similar to qemu_co_rwlock_wrlock. Do
+     * not account for the lock twice in self->locks_held.
+     */
+    self->locks_held--;
+}
-- 
2.13.0
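
As a usage illustration only (not part of the patch), the pattern these
primitives enable could look like the sketch below. HypotheticalCache, its
fields and cache_get() are made-up placeholders for whatever state a CoRwlock
protects; only the qemu_co_rwlock_* calls come from this series. Note the
recheck after qemu_co_rwlock_upgrade(): as the new doc comment says, another
writer may have run while the upgrade blocked.

/* Sketch, not a definitive implementation: lazily fill a cached value under
 * a CoRwlock, upgrading only when a write turns out to be needed and
 * downgrading back without ever dropping the lock. Assumes the caller has
 * initialized c->lock with qemu_co_rwlock_init() and that cache_get() runs
 * in coroutine context.
 */
#include "qemu/osdep.h"
#include "qemu/coroutine.h"

typedef struct HypotheticalCache {
    CoRwlock lock;      /* protects value and valid */
    int value;
    bool valid;
} HypotheticalCache;

static int coroutine_fn cache_get(HypotheticalCache *c)
{
    int ret;

    qemu_co_rwlock_rdlock(&c->lock);
    if (!c->valid) {
        /* Upgrade in place instead of unlock+wrlock; this may yield until
         * the other readers leave their critical sections.
         */
        qemu_co_rwlock_upgrade(&c->lock);
        if (!c->valid) {          /* another writer may have filled it */
            c->value = 42;        /* hypothetical (re)computation */
            c->valid = true;
        }
        /* Guaranteed not to block, unlike unlock followed by rdlock. */
        qemu_co_rwlock_downgrade(&c->lock);
    }
    ret = c->value;
    qemu_co_rwlock_unlock(&c->lock);
    return ret;
}

The downgrade is what keeps the read of c->value consistent with the value
just written: with unlock+rdlock another writer could have invalidated the
cache in between.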