From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Herbert Xu,
    stable@kernel.org, Boqun Feng, "Paul E. McKenney", Linus Torvalds
Subject: [PATCH 4.19 18/51] rcu: locking and unlocking need to always be at least barriers
Date: Sun, 9 Jun 2019 18:41:59 +0200
Message-Id: <20190609164128.133131668@linuxfoundation.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190609164127.123076536@linuxfoundation.org>
References: <20190609164127.123076536@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Linus Torvalds

commit 66be4e66a7f422128748e3c3ef6ee72b20a6197b upstream.

Herbert Xu pointed out that commit bb73c52bad36 ("rcu: Don't disable
preemption for Tiny and Tree RCU readers") was incorrect in making the
preempt_disable/enable() be conditional on CONFIG_PREEMPT_COUNT.

If CONFIG_PREEMPT_COUNT isn't enabled, the preemption enable/disable is
a no-op, but still is a compiler barrier.  And RCU locking still
_needs_ that compiler barrier.

It is simply fundamentally not true that RCU locking would be a
complete no-op: we still need to guarantee (for example) that things
that can trap and cause preemption cannot migrate into the RCU locked
region.
The way we do that is by making it a barrier.

See for example commit 386afc91144b ("spinlocks and preemption points
need to be at least compiler barriers") from back in 2013 that had
similar issues with spinlocks that become no-ops on UP: they must still
constrain the compiler from moving other operations into the critical
region.

Now, it is true that a lot of RCU operations already use READ_ONCE()
and WRITE_ONCE() (which in practice likely would never be re-ordered
wrt anything remotely interesting), but it is also true that that is
not globally the case, and that it's not even necessarily always
possible (ie bitfields etc).

Reported-by: Herbert Xu
Fixes: bb73c52bad36 ("rcu: Don't disable preemption for Tiny and Tree RCU readers")
Cc: stable@kernel.org
Cc: Boqun Feng
Cc: Paul E. McKenney
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/rcupdate.h | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -78,14 +78,12 @@ void synchronize_rcu(void);
 
 static inline void __rcu_read_lock(void)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
-		preempt_disable();
+	preempt_disable();
 }
 
 static inline void __rcu_read_unlock(void)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
-		preempt_enable();
+	preempt_enable();
 }
 
 static inline void synchronize_rcu(void)
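
For readers wondering why dropping the IS_ENABLED(CONFIG_PREEMPT_COUNT)
test is safe: on !CONFIG_PREEMPT_COUNT kernels, preempt_disable() and
preempt_enable() are already defined as barrier(), a compiler-only
barrier, so the unconditional calls in the hunk above emit no
instructions yet still keep the compiler from moving memory accesses
across the RCU read-side boundaries.  The stand-alone sketch below
mimics that configuration; the macro bodies are paraphrased from
include/linux/preempt.h and include/linux/compiler.h for illustration,
not copied verbatim.

/*
 * Simplified, user-space sketch of the !CONFIG_PREEMPT_COUNT case.
 * Illustrative only; not the kernel's exact definitions.
 */
#include <stdio.h>

/* Compiler-only barrier: emits no instructions, but forbids the
 * compiler from reordering memory accesses across this point. */
#define barrier()          __asm__ __volatile__("" : : : "memory")

/* With CONFIG_PREEMPT_COUNT disabled, these reduce to pure compiler
 * barriers. */
#define preempt_disable()  barrier()
#define preempt_enable()   barrier()

/* After this patch, the RCU read-side primitives call them
 * unconditionally, so the critical section is always bounded by
 * compiler barriers. */
static inline void __rcu_read_lock(void)   { preempt_disable(); }
static inline void __rcu_read_unlock(void) { preempt_enable(); }

static int shared_state;

int main(void)
{
	__rcu_read_lock();
	/* The compiler may not hoist or sink this store past the
	 * surrounding barriers, which is the guarantee the commit
	 * message is about. */
	shared_state = 1;
	__rcu_read_unlock();
	printf("shared_state = %d\n", shared_state);
	return 0;
}

Built with gcc -O2, the toy's critical section should compile to a
plain store with no preemption bookkeeping, yet the compiler is not
allowed to move other memory accesses into or out of it; that is the
property the 4.19 backport above restores.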