From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 2 Feb 2021 10:57:13 -0800
In-Reply-To: <20210202185734.1680553-1-bgardon@google.com>
Message-Id: <20210202185734.1680553-8-bgardon@google.com>
References: <20210202185734.1680553-1-bgardon@google.com>
Subject: [PATCH v2 07/28] sched: Add needbreak for rwlocks
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li,
    Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon, Ingo Molnar,
    Will Deacon, Peter Zijlstra, Davidlohr Bueso, Waiman Long
Contention awareness while holding a spin lock is essential for reducing
latency when long-running kernel operations can hold that lock. Add the
same contention detection interface for read/write spin locks.

CC: Ingo Molnar
CC: Will Deacon
Acked-by: Peter Zijlstra
Acked-by: Davidlohr Bueso
Acked-by: Waiman Long
Acked-by: Paolo Bonzini
Signed-off-by: Ben Gardon
---
 include/linux/sched.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6e3a5eeec509..5d1378e5a040 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1912,6 +1912,23 @@ static inline int spin_needbreak(spinlock_t *lock)
 #endif
 }
 
+/*
+ * Check if a rwlock is contended.
+ * Returns non-zero if there is another task waiting on the rwlock.
+ * Returns zero if the lock is not contended or the system / underlying
+ * rwlock implementation does not support contention detection.
+ * Technically does not depend on CONFIG_PREEMPTION, but a general need
+ * for low latency.
+ */
+static inline int rwlock_needbreak(rwlock_t *lock)
+{
+#ifdef CONFIG_PREEMPTION
+	return rwlock_is_contended(lock);
+#else
+	return 0;
+#endif
+}
+
 static __always_inline bool need_resched(void)
 {
 	return unlikely(tif_need_resched());
-- 
2.30.0.365.g02bc693789-goog
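[Editor's note] The pattern this helper enables can be sketched in userspace: a long-running operation holding a write lock polls for waiters and voluntarily drops the lock when one appears. This is a hypothetical illustration, not kernel code: `struct fake_rwlock`, its `waiters` field, `fake_rwlock_needbreak()`, and `long_op()` are stand-ins for `rwlock_t`, `rwlock_is_contended()`, and a write-side critical section that would yield via `write_unlock()` / `cond_resched()` / `write_lock()`.

```c
#include <stdio.h>

/* Hypothetical stand-in for rwlock_t: tracks how many tasks are queued. */
struct fake_rwlock {
	int waiters;	/* simulated count of tasks waiting on the lock */
};

/* Analogue of rwlock_needbreak(): non-zero if another task is waiting. */
int fake_rwlock_needbreak(struct fake_rwlock *lock)
{
	return lock->waiters != 0;
}

/*
 * A long-running operation that holds the (pretend) write lock for
 * 'iters' steps, but breaks whenever a waiter shows up.
 * Returns the number of times it yielded the lock.
 */
int long_op(struct fake_rwlock *lock, int iters)
{
	int yields = 0;

	for (int i = 0; i < iters; i++) {
		if (i == iters / 2)
			lock->waiters = 1;	/* a reader arrives mid-loop */

		if (fake_rwlock_needbreak(lock)) {
			/* Kernel version: write_unlock(), cond_resched(),
			 * write_lock(), then continue the operation. */
			yields++;
			lock->waiters = 0;	/* the waiter got its turn */
		}
	}
	return yields;
}
```

Under these assumptions, `long_op(&lock, 1000)` yields exactly once, at the iteration where the simulated waiter appears; without the `fake_rwlock_needbreak()` check, the waiter would block for the remaining 500 iterations. That latency bound is the point of exposing contention to long-running lock holders.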