Date: Mon, 2 Dec 2019 14:26:43 -0500
From: Dennis Zhou
To: Linus Torvalds
Cc: Tejun Heo, Christoph Lameter, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [GIT PULL] percpu changes for v5.5-rc1
Message-ID: <20191202192643.GA19946@dennisz-mbp>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.12.2 (2019-09-21)

Hi Linus,

This pull request has a change to fix percpu-refcount for RT kernels:
rcu-sched disables preemption, and the refcount release callback might
acquire a spinlock, which on RT is a sleeping lock that must not be
taken with preemption disabled.

Work is in progress by Roman Gushchin to add memcg accounting for
percpu memory; that may land in either for-5.6 or for-5.7. There are
also some sparse warnings that we're sorting out now.

Thanks,
Dennis

The following changes since commit 4f5cafb5cb8471e54afdc9054d973535614f7675:

  Linux 5.4-rc3 (2019-10-13 16:37:36 -0700)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu.git for-5.5

for you to fetch changes up to ba30e27405afa0b13b79532a345977b3e58ad501:

  Revert "percpu: add __percpu to SHIFT_PERCPU_PTR" (2019-11-25 14:28:04 -0800)

----------------------------------------------------------------
Ben Dooks (1):
      percpu: add __percpu to SHIFT_PERCPU_PTR

Dennis Zhou (1):
      Revert "percpu: add __percpu to SHIFT_PERCPU_PTR"

Sebastian Andrzej Siewior (1):
      percpu-refcount: Use normal instead of RCU-sched"

 include/linux/percpu-refcount.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 7aef0abc194a..390031e816dc 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -186,14 +186,14 @@ static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr
 {
 	unsigned long __percpu *percpu_count;
 
-	rcu_read_lock_sched();
+	rcu_read_lock();
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_add(*percpu_count, nr);
 	else
 		atomic_long_add(nr, &ref->count);
 
-	rcu_read_unlock_sched();
+	rcu_read_unlock();
 }
 
 /**
@@ -223,7 +223,7 @@ static inline bool percpu_ref_tryget(struct percpu_ref *ref)
 	unsigned long __percpu *percpu_count;
 	bool ret;
 
-	rcu_read_lock_sched();
+	rcu_read_lock();
 
 	if (__ref_is_percpu(ref, &percpu_count)) {
 		this_cpu_inc(*percpu_count);
@@ -232,7 +232,7 @@ static inline bool percpu_ref_tryget(struct percpu_ref *ref)
 		ret = atomic_long_inc_not_zero(&ref->count);
 	}
 
-	rcu_read_unlock_sched();
+	rcu_read_unlock();
 
 	return ret;
 }
@@ -257,7 +257,7 @@ static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
 	unsigned long __percpu *percpu_count;
 	bool ret = false;
 
-	rcu_read_lock_sched();
+	rcu_read_lock();
 
 	if (__ref_is_percpu(ref, &percpu_count)) {
 		this_cpu_inc(*percpu_count);
@@ -266,7 +266,7 @@ static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
 		ret = atomic_long_inc_not_zero(&ref->count);
 	}
 
-	rcu_read_unlock_sched();
+	rcu_read_unlock();
 
 	return ret;
 }
@@ -285,14 +285,14 @@ static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr
 {
 	unsigned long __percpu *percpu_count;
 
-	rcu_read_lock_sched();
+	rcu_read_lock();
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_sub(*percpu_count, nr);
 	else if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
 		ref->release(ref);
 
-	rcu_read_unlock_sched();
+	rcu_read_unlock();
 }
 
 /**
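
For context, a minimal, hypothetical sketch of the pattern this fixes
(the widget type, list, and helpers below are illustrative only, not
from the tree): a percpu_ref user whose ->release() callback takes a
spinlock_t. On PREEMPT_RT, spinlock_t is a sleeping lock, so it must
not be acquired with preemption disabled -- which is exactly the state
rcu_read_lock_sched() imposed around the final percpu_ref_put() that
can invoke ->release(). Plain rcu_read_lock() keeps that section
preemptible.

  #include <linux/kernel.h>
  #include <linux/list.h>
  #include <linux/percpu-refcount.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>

  /* Hypothetical example object; not part of the patch. */
  struct widget {
  	struct percpu_ref ref;
  	struct list_head node;
  };

  static LIST_HEAD(widget_list);
  static DEFINE_SPINLOCK(widget_list_lock);	/* sleeping lock on RT */

  /* Runs when the last reference is dropped via percpu_ref_put(). */
  static void widget_release(struct percpu_ref *ref)
  {
  	struct widget *w = container_of(ref, struct widget, ref);

  	/*
  	 * On PREEMPT_RT this spin_lock() may sleep. Under the old
  	 * rcu_read_lock_sched() in percpu_ref_put(), preemption was
  	 * disabled at this point, an invalid context on RT; with
  	 * plain rcu_read_lock() the section stays preemptible.
  	 */
  	spin_lock(&widget_list_lock);
  	list_del(&w->node);
  	spin_unlock(&widget_list_lock);
  	kfree(w);
  }

Such a ref would be set up with percpu_ref_init(&w->ref, widget_release,
0, GFP_KERNEL), after which percpu_ref_put() on the final reference
calls widget_release() from inside the RCU read-side section shown in
the diff above.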