From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752844AbbJFQOE (ORCPT );
	Tue, 6 Oct 2015 12:14:04 -0400
Received: from e32.co.us.ibm.com ([32.97.110.150]:40373 "EHLO
	e32.co.us.ibm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752697AbbJFQN6 (ORCPT );
	Tue, 6 Oct 2015 12:13:58 -0400
X-IBM-Helo: d03dlp03.boulder.ibm.com
X-IBM-MailFrom: paulmck@linux.vnet.ibm.com
X-IBM-RcptTo: linux-kernel@vger.kernel.org
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: mingo@kernel.org, jiangshanlai@gmail.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@efficios.com,
	josh@joshtriplett.org, tglx@linutronix.de, peterz@infradead.org,
	rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com,
	dvhart@linux.intel.com, fweisbec@gmail.com, oleg@redhat.com,
	bobby.prani@gmail.com, Patrick Marlier,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Subject: [PATCH tip/core/rcu 11/13] rculist: Make list_entry_rcu() use lockless_dereference()
Date: Tue, 6 Oct 2015 09:13:46 -0700
Message-Id: <1444148028-11551-11-git-send-email-paulmck@linux.vnet.ibm.com>
X-Mailer: git-send-email 2.5.2
In-Reply-To: <1444148028-11551-1-git-send-email-paulmck@linux.vnet.ibm.com>
References: <20151006161305.GA9799@linux.vnet.ibm.com>
	<1444148028-11551-1-git-send-email-paulmck@linux.vnet.ibm.com>
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 15100616-0005-0000-0000-000018BC7ECF
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Patrick Marlier

The current list_entry_rcu() implementation copies the pointer to a
stack variable, then invokes rcu_dereference_raw() on it.  This
results in an additional store-load pair.  Now, most compilers will
emit normal store and load instructions, which might seem to be of
negligible overhead, but this results in a load-hit-store situation
that can cause surprisingly long pipeline stalls, even on modern
microprocessors.  The problem is that the store takes time to update
the store buffer, which can delay the immediately following load.

This commit therefore switches to the lockless_dereference()
primitive, which does not expect the __rcu annotations (which are in
any case absent from the list_head structure) and which, like
rcu_dereference_raw(), does not check for an enclosing RCU read-side
critical section.  Most importantly, it does not copy the pointer,
thus avoiding the load-hit-store overhead.

Signed-off-by: Patrick Marlier
[ paulmck: Switched to lockless_dereference() to suppress sparse
  warnings. ]
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rculist.h | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index 17c6b1f84a77..5ed540986019 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -247,10 +247,7 @@ static inline void list_splice_init_rcu(struct list_head *list,
  * primitives such as list_add_rcu() as long as it's guarded by rcu_read_lock().
  */
 #define list_entry_rcu(ptr, type, member) \
-({ \
-	typeof(*ptr) __rcu *__ptr = (typeof(*ptr) __rcu __force *)ptr; \
-	container_of((typeof(ptr))rcu_dereference_raw(__ptr), type, member); \
-})
+	container_of(lockless_dereference(ptr), type, member)
 
 /**
  * Where are list_empty_rcu() and list_first_entry_rcu()?
-- 
2.5.2
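
As a side note on the mechanism the commit message describes, below is
a minimal, self-contained userspace sketch in GNU C (gcc/clang) of the
two macro shapes, not kernel code.  READ_ONCE_SKETCH(),
container_of_sketch(), list_entry_old(), and list_entry_new() are
names invented for this illustration; the kernel's
lockless_dereference() of this era was, approximately, a READ_ONCE()
followed by smp_read_barrier_depends().  The old shape takes the
address of a stack temporary for its volatile read, forcing the
store-load pair described above; the new shape volatile-loads the
pointer in place, with no temporary.

	#include <stddef.h>
	#include <stdio.h>

	/* Stand-in for the kernel's READ_ONCE(): a volatile load of x. */
	#define READ_ONCE_SKETCH(x)  (*(volatile typeof(x) *)&(x))

	/* Stand-in for container_of(): map a member pointer back to its
	 * enclosing structure. */
	#define container_of_sketch(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	struct list_head { struct list_head *next, *prev; };
	struct item { int value; struct list_head node; };

	/* Old shape: copy the pointer to a stack temporary, then do the
	 * volatile load through the temporary's address.  The store to
	 * __ptr followed by the reload is the load-hit-store pair. */
	#define list_entry_old(ptr, type, member)			\
	({								\
		typeof(ptr) __ptr = (ptr);	/* stack store */	\
		container_of_sketch(READ_ONCE_SKETCH(__ptr),		\
				    type, member);			\
	})

	/* New shape: volatile-load the pointer in place; no temporary,
	 * hence no extra store-load pair. */
	#define list_entry_new(ptr, type, member) \
		container_of_sketch(READ_ONCE_SKETCH(ptr), type, member)

	int main(void)
	{
		struct item it = { .value = 42 };
		struct list_head head = { .next = &it.node,
					  .prev = &it.node };

		struct item *a = list_entry_old(head.next,
						struct item, node);
		struct item *b = list_entry_new(head.next,
						struct item, node);

		printf("%d %d\n", a->value, b->value);	/* "42 42" */
		return 0;
	}

Both macros compute the same struct item pointer; the difference is
only in whether the list pointer makes a round trip through the stack
before the containing structure is located.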