From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Pranith Kumar <bobby.prani@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>,
Steven Rostedt <rostedt@goodmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
Lai Jiangshan <laijs@cn.fujitsu.com>,
"open list:READ-COPY UPDATE..." <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
Date: Wed, 23 Jul 2014 13:23:20 -0700 [thread overview]
Message-ID: <20140723202320.GF11241@linux.vnet.ibm.com> (raw)
In-Reply-To: <53D0180B.40709@gmail.com>
On Wed, Jul 23, 2014 at 04:16:11PM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 3:15 PM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> >> > If you change the "awake" to something like "am_online", I could get
> >> > behind this one.
> >>
> >> OK! I will submit that in the next series (with the zalloc check).
> >
> > You caught me at a weak moment... This change just adds an extra
> > line of code and doesn't really help anything.
> >
> > So please leave this one out.
> >
>
> <resending as the assembly was garbled>
>
> It adds an extra line of code and generates better assembly code. Last
> try to convince you before I give up :-)
If you got this kind of savings in __rcu_read_lock() or
__rcu_read_unlock(), I might be interested. Hard to get excited about
__call_rcu_core(), especially given that a smarter compiler might be
able to make this transformation on its own.
Thanx, Paul
> Size:
> text data bss dec hex filename
> before 30664 7844 32 38540 968c kernel/rcu/tree.o
> after 30648 7844 32 38524 967c kernel/rcu/tree.o
>
> Assembly:
>
> Before:
>
> if (!rcu_is_watching() && cpu_online(smp_processor_id()))
> 26d3: 83 e2 01 and $0x1,%edx
> 26d6: 75 1f jne 26f7 <__call_rcu+0x1c7>
> 26d8: 65 8b 14 25 00 00 00 mov %gs:0x0,%edx
> 26df: 00
> 26dc: R_X86_64_32S cpu_number
> 26e0: 48 8b 0d 00 00 00 00 mov 0x0(%rip),%rcx # 26e7 <__call_rcu+0x1b7>
> 26e3: R_X86_64_PC32 cpu_online_mask-0x4
> 26e7: 89 d2 mov %edx,%edx
> 26e9: 48 0f a3 11 bt %rdx,(%rcx)
> 26ed: 19 d2 sbb %edx,%edx
> 26ef: 85 d2 test %edx,%edx
> 26f1: 0f 85 29 02 00 00 jne 2920 <__call_rcu+0x3f0>
> invoke_rcu_core();
>
> /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
> if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
> 26f7: 48 f7 45 d0 00 02 00 testq $0x200,-0x30(%rbp)
> 26fe: 00
> 26ff: 0f 84 e6 fe ff ff je 25eb <__call_rcu+0xbb>
> 2705: 65 8b 14 25 00 00 00 mov %gs:0x0,%edx
> 270c: 00
> 2709: R_X86_64_32S cpu_number
> 270d: 48 8b 0d 00 00 00 00 mov 0x0(%rip),%rcx # 2714 <__call_rcu+0x1e4>
> 2710: R_X86_64_PC32 cpu_online_mask-0x4
> 2714: 89 d2 mov %edx,%edx
> 2716: 48 0f a3 11 bt %rdx,(%rcx)
> 271a: 19 d2 sbb %edx,%edx
> 271c: 85 d2 test %edx,%edx
> 271e: 0f 84 c7 fe ff ff je 25eb <__call_rcu+0xbb>
>
> After:
>
> bool cpu_up = cpu_online(smp_processor_id());
> 26c1: 65 8b 14 25 00 00 00 mov %gs:0x0,%edx
> 26c8: 00
> 26c5: R_X86_64_32S cpu_number
> 26c9: 48 8b 0d 00 00 00 00 mov 0x0(%rip),%rcx # 26d0 <__call_rcu+0x1a0>
> 26cc: R_X86_64_PC32 cpu_online_mask-0x4
> 26d0: 89 d2 mov %edx,%edx
> 26d2: 48 0f a3 11 bt %rdx,(%rcx)
> 26d6: 19 d2 sbb %edx,%edx
> 26d8: 85 d2 test %edx,%edx
> 26da: 41 0f 95 c4 setne %r12b
>
> if (!rcu_is_watching() && cpu_up)
> 26f0: 83 e2 01 and $0x1,%edx
> 26f3: 75 09 jne 26fe <__call_rcu+0x1ce>
> 26f5: 45 84 e4 test %r12b,%r12b
> 26f8: 0f 85 12 02 00 00 jne 2910 <__call_rcu+0x3e0>
> invoke_rcu_core();
>
> /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
> if (irqs_disabled_flags(flags) || !cpu_up)
> 26fe: 48 f7 45 d0 00 02 00 testq $0x200,-0x30(%rbp)
> 2705: 00
> 2706: 0f 84 df fe ff ff je 25eb <__call_rcu+0xbb>
> 270c: 45 84 e4 test %r12b,%r12b
> 270f: 0f 84 d6 fe ff ff je 25eb <__call_rcu+0xbb>
>
> --
> Pranith
>
Thread overview: 58+ messages
2014-07-23 5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
2014-07-23 5:09 ` [PATCH 01/16] rcu: Use rcu_num_nodes instead of NUM_RCU_NODES Pranith Kumar
2014-07-23 5:09 ` [PATCH 02/16] rcu: Check return value for cpumask allocation Pranith Kumar
2014-07-23 12:06 ` Paul E. McKenney
2014-07-23 12:49 ` Pranith Kumar
2014-07-23 17:14 ` Pranith Kumar
2014-07-23 18:01 ` Paul E. McKenney
2014-07-23 5:09 ` [PATCH 03/16] rcu: Fix comment for gp_state field values Pranith Kumar
2014-07-23 5:09 ` [PATCH 04/16] rcu: Remove redundant check for an online CPU Pranith Kumar
2014-07-23 12:09 ` Paul E. McKenney
2014-07-23 13:23 ` Pranith Kumar
2014-07-23 13:41 ` Paul E. McKenney
2014-07-23 14:01 ` Pranith Kumar
2014-07-23 14:14 ` Paul E. McKenney
2014-07-23 15:07 ` Pranith Kumar
2014-07-23 15:21 ` Pranith Kumar
2014-07-23 5:09 ` [PATCH 05/16] rcu: Add noreturn attribute to boost kthread Pranith Kumar
2014-07-23 5:09 ` [PATCH 06/16] rcu: Clear gp_flags only when actually starting new gp Pranith Kumar
2014-07-23 12:13 ` Paul E. McKenney
2014-07-23 5:09 ` [PATCH 07/16] rcu: Save and restore irq flags in rcu_gp_cleanup() Pranith Kumar
2014-07-23 12:16 ` Paul E. McKenney
2014-07-23 5:09 ` [PATCH 08/16] rcu: Clean up rcu_spawn_one_boost_kthread() Pranith Kumar
2014-07-23 5:09 ` [PATCH 09/16] rcu: Remove redundant check for online cpu Pranith Kumar
2014-07-23 12:21 ` Paul E. McKenney
2014-07-23 12:59 ` Pranith Kumar
2014-07-23 13:50 ` Paul E. McKenney
2014-07-23 14:12 ` Pranith Kumar
2014-07-23 14:23 ` Paul E. McKenney
2014-07-23 15:11 ` Pranith Kumar
2014-07-23 15:30 ` Paul E. McKenney
2014-07-23 15:44 ` Pranith Kumar
2014-07-23 19:15 ` Paul E. McKenney
2014-07-23 20:01 ` Pranith Kumar
2014-07-23 20:16 ` Pranith Kumar
2014-07-23 20:23 ` Paul E. McKenney [this message]
2014-07-23 5:09 ` [PATCH 10/16] rcu: Check for RCU_FLAG_GP_INIT bit in gp_flags for spurious wakeup Pranith Kumar
2014-07-23 12:23 ` Paul E. McKenney
2014-07-23 5:09 ` [PATCH 11/16] rcu: Check for spurious wakeup using return value Pranith Kumar
2014-07-23 12:26 ` Paul E. McKenney
2014-07-24 2:36 ` Pranith Kumar
2014-07-24 3:43 ` Paul E. McKenney
2014-07-24 4:03 ` Pranith Kumar
2014-07-24 18:12 ` Paul E. McKenney
2014-07-24 19:59 ` Pranith Kumar
2014-07-24 20:27 ` Paul E. McKenney
2014-07-23 5:09 ` [PATCH 12/16] rcu: Rename rcu_spawn_gp_kthread() to rcu_spawn_kthreads() Pranith Kumar
2014-07-23 5:09 ` [PATCH 13/16] rcu: Spawn nocb kthreads from rcu_prepare_kthreads() Pranith Kumar
2014-07-23 5:09 ` [PATCH 14/16] rcu: Remove redundant checks for rcu_scheduler_fully_active Pranith Kumar
2014-07-23 12:27 ` Paul E. McKenney
2014-07-23 5:09 ` [PATCH 15/16] rcu: Check for a nocb cpu before trying to spawn nocb threads Pranith Kumar
2014-07-23 12:28 ` Paul E. McKenney
2014-07-23 13:14 ` Pranith Kumar
2014-07-23 13:42 ` Paul E. McKenney
2014-07-23 5:09 ` [PATCH 16/16] rcu: kvm.sh: Fix error when you pass --cpus argument Pranith Kumar
2014-07-23 12:31 ` Paul E. McKenney
2014-07-23 14:45 ` [PATCH 00/16] rcu: Some minor fixes and cleanups Paul E. McKenney
2014-08-27 1:10 ` Pranith Kumar
2014-08-27 3:20 ` Paul E. McKenney