From: Waiman Long
To: Peter Zijlstra, Ingo Molnar, Will Deacon
Cc: linux-kernel@vger.kernel.org, Davidlohr Bueso, Phil Auld, Waiman Long
Subject: [PATCH 3/5] locking/rwsem: Enable reader optimistic lock stealing
Date: Tue, 17 Nov 2020 22:04:27 -0500
Message-Id: <20201118030429.23017-4-longman@redhat.com>
In-Reply-To: <20201118030429.23017-1-longman@redhat.com>
References: <20201118030429.23017-1-longman@redhat.com>

If the optimistic spinning queue is empty and the rwsem does not have
the handoff or write-lock bits set, there is no need to call
rwsem_optimistic_spin() to spin on the lock. Instead, the reader can
steal the lock directly, since its reader bias has already been added
to the count. If it is the first reader in this state, it will also
try to wake up the other readers in the wait queue.
Signed-off-by: Waiman Long
---
 kernel/locking/lock_events_list.h |  1 +
 kernel/locking/rwsem.c            | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/kernel/locking/lock_events_list.h b/kernel/locking/lock_events_list.h
index 239039d0ce21..270a0d351932 100644
--- a/kernel/locking/lock_events_list.h
+++ b/kernel/locking/lock_events_list.h
@@ -63,6 +63,7 @@ LOCK_EVENT(rwsem_opt_nospin)	/* # of disabled optspins		*/
 LOCK_EVENT(rwsem_opt_norspin)	/* # of disabled reader-only optspins	*/
 LOCK_EVENT(rwsem_opt_rlock2)	/* # of opt-acquired 2ndary read locks	*/
 LOCK_EVENT(rwsem_rlock)		/* # of read locks acquired		*/
+LOCK_EVENT(rwsem_rlock_steal)	/* # of read locks by lock stealing	*/
 LOCK_EVENT(rwsem_rlock_fast)	/* # of fast read locks acquired	*/
 LOCK_EVENT(rwsem_rlock_fail)	/* # of failed read lock acquisitions	*/
 LOCK_EVENT(rwsem_rlock_handoff)	/* # of read lock handoffs		*/
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index ee374ae061c3..930dd4af3639 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -957,6 +957,12 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
 	}
 	return false;
 }
+
+static inline bool osq_is_empty(struct rw_semaphore *sem)
+{
+	return !osq_is_locked(&sem->osq);
+}
+
 #else
 static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
 					   unsigned long nonspinnable)
@@ -977,6 +983,10 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
 	return false;
 }
 
+static inline bool osq_is_empty(struct rw_semaphore *sem)
+{
+	return false;
+}
 static inline int
 rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
 {
@@ -1007,6 +1017,22 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
 	    !(count & RWSEM_WRITER_LOCKED))
 		goto queue;
 
+	/*
+	 * Reader optimistic lock stealing
+	 *
+	 * We can take the read lock directly without doing
+	 * rwsem_optimistic_spin() if the conditions are right.
+	 * Also wake up other readers if it is the first reader.
+	 */
+	if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
+	    osq_is_empty(sem)) {
+		rwsem_set_reader_owned(sem);
+		lockevent_inc(rwsem_rlock_steal);
+		if (rcnt == 1)
+			goto wake_readers;
+		return sem;
+	}
+
 	/*
 	 * Save the current read-owner of rwsem, if available, and the
 	 * reader nonspinnable bit.
@@ -1029,6 +1055,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
 	 * Wake up other readers in the wait list if the front
 	 * waiter is a reader.
 	 */
+wake_readers:
 	if ((atomic_long_read(&sem->count) & RWSEM_FLAG_WAITERS)) {
 		raw_spin_lock_irq(&sem->wait_lock);
 		if (!list_empty(&sem->wait_list))
-- 
2.18.1
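
[Editor's note: for readers following the thread, below is a minimal,
self-contained userspace sketch of the stealing condition the patch
introduces. It is illustrative only and not part of the patch: the flag
values mirror kernel/locking/rwsem.c, while can_steal_read_lock() and the
osq_empty parameter are hypothetical stand-ins for the in-kernel count
check and !osq_is_locked(&sem->osq).]

#include <stdbool.h>
#include <stdio.h>

/* Flag values mirroring kernel/locking/rwsem.c */
#define RWSEM_WRITER_LOCKED	(1UL << 0)
#define RWSEM_FLAG_WAITERS	(1UL << 1)
#define RWSEM_FLAG_HANDOFF	(1UL << 2)
#define RWSEM_READER_SHIFT	8
#define RWSEM_READER_BIAS	(1UL << RWSEM_READER_SHIFT)

/*
 * A reader may steal the lock iff no writer holds it, no handoff is
 * pending, and no optimistic spinner sits on the OSQ. The reader's own
 * bias is already in the count by the time the slowpath is entered, so
 * only the flag bits need checking. (Hypothetical model function.)
 */
static bool can_steal_read_lock(unsigned long count, bool osq_empty)
{
	return !(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
	       osq_empty;
}

int main(void)
{
	/* One reader bias added by the failed fast path, waiters queued. */
	unsigned long count = RWSEM_READER_BIAS | RWSEM_FLAG_WAITERS;
	unsigned long rcnt = count >> RWSEM_READER_SHIFT;

	printf("steal:   %d\n", can_steal_read_lock(count, true));	/* 1 */
	printf("handoff: %d\n",
	       can_steal_read_lock(count | RWSEM_FLAG_HANDOFF, true));	/* 0 */
	/*
	 * rcnt == 1 marks the first reader, which also wakes up the
	 * readers parked in the wait queue (the wake_readers label).
	 */
	printf("first reader: %s\n", rcnt == 1 ? "yes" : "no");
	return 0;
}

Compiled with gcc this prints 1 for the unlocked case and 0 once the
handoff bit is set, matching the guard added in rwsem_down_read_slowpath():
waiters alone do not block stealing, but a pending handoff or write lock
does.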