From: Joel Fernandes <joel@joelfernandes.org>
To: "Paul E. McKenney" <paulmck@linux.ibm.com>
Cc: linux-kernel@vger.kernel.org, max.byungchul.park@gmail.com,
byungchul.park@lge.com, Davidlohr Bueso <dave@stgolabs.net>,
Josh Triplett <josh@joshtriplett.org>,
kernel-team@android.com, kernel-team@lge.com,
Lai Jiangshan <jiangshanlai@gmail.com>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
Rao Shoaib <rao.shoaib@oracle.com>,
rcu@vger.kernel.org, Steven Rostedt <rostedt@goodmis.org>
Subject: Re: [PATCH RFC v1 2/2] rcuperf: Add kfree_rcu performance Tests
Date: Sat, 10 Aug 2019 22:01:54 -0400 [thread overview]
Message-ID: <20190811020154.GA74292@google.com> (raw)
In-Reply-To: <20190807002915.GV28441@linux.ibm.com>
On Tue, Aug 06, 2019 at 05:29:15PM -0700, Paul E. McKenney wrote:
> On Tue, Aug 06, 2019 at 05:20:41PM -0400, Joel Fernandes (Google) wrote:
> > This test runs kfree_rcu in a loop to measure performance of the new
> > kfree_rcu, with and without patch.
> >
> > To see improvement, run with boot parameters:
> > rcuperf.kfree_loops=2000 rcuperf.kfree_alloc_num=100 rcuperf.perf_type=kfree
> >
> > Without patch, test runs in 6.9 seconds.
> > With patch, test runs in 6.1 seconds (+13% improvement)
> >
> > If it is desired to run the test but with the traditional (non-batched)
> > kfree_rcu, for example to compare results, then you could pass along the
> > rcuperf.kfree_no_batch=1 boot parameter.
>
> You lost me on this one. You ran two runs, with rcuperf.kfree_no_batch=1
> and without? Or you ran this patch both with and without the earlier
> patch, and could have run with the patch and rcuperf.kfree_no_batch=1?
>
> If the latter, it would be good to try all three.
Did this in the new patch; will post shortly.
[snip]
> > +torture_param(int, kfree_nthreads, -1, "Number of RCU reader threads");
> > +torture_param(int, kfree_alloc_num, 8000, "Number of allocations and frees done by a thread");
> > +torture_param(int, kfree_alloc_size, 16, "Size of each allocation");
>
> Is this used? How does it relate to KFREE_OBJ_BYTES?
It is not used; I have removed it.
> > +torture_param(int, kfree_loops, 10, "Size of each allocation");
>
> I suspect that this kfree_loops string is out of date.
Updated, thanks.
> > +torture_param(int, kfree_no_batch, 0, "Use the non-batching (slower) version of kfree_rcu");
>
> All of these need to be added to kernel-parameters.txt. Along with
> any added by the earlier patch, for that matter.
Will do.
> > +static struct task_struct **kfree_reader_tasks;
> > +static int kfree_nrealthreads;
> > +static atomic_t n_kfree_perf_thread_started;
> > +static atomic_t n_kfree_perf_thread_ended;
> > +
> > +#define KFREE_OBJ_BYTES 8
> > +
> > +struct kfree_obj {
> > + char kfree_obj[KFREE_OBJ_BYTES];
> > + struct rcu_head rh;
> > +};
> > +
> > +void kfree_call_rcu_nobatch(struct rcu_head *head, rcu_callback_t func);
> > +
> > +static int
> > +kfree_perf_thread(void *arg)
> > +{
> > + int i, l = 0;
>
> It is really easy to confuse "l" and "1" in some fonts, so please use
> a different name. (From the "showing my age" department: On typical
> 1970s typewriters, there was no numeral "1" -- you typed the letter
> "l" instead, thus anticipating at least the first digit of "1337".)
Changed 'l' to 'loops' ;). I did see typewriters around in my childhood; I thought
they were pretty odd machines :-D. I am sure my daughter will think the same
about land-line phones :-D
> > + long me = (long)arg;
> > + struct kfree_obj **alloc_ptrs;
> > + u64 start_time, end_time;
> > +
> > + VERBOSE_PERFOUT_STRING("kfree_perf_thread task started");
> > + set_cpus_allowed_ptr(current, cpumask_of(me % nr_cpu_ids));
> > + set_user_nice(current, MAX_NICE);
> > + atomic_inc(&n_kfree_perf_thread_started);
> > +
> > + alloc_ptrs = (struct kfree_obj **)kmalloc(sizeof(struct kfree_obj *) * kfree_alloc_num,
> > + GFP_KERNEL);
> > + if (!alloc_ptrs)
> > + return -ENOMEM;
> > +
> > + start_time = ktime_get_mono_fast_ns();
>
> Don't you want to announce that you started here rather than above in
> order to avoid (admittedly slight) measurement inaccuracies?
Yes, in the revised patch the announcement happens here.
> > + do {
> > + for (i = 0; i < kfree_alloc_num; i++) {
> > + alloc_ptrs[i] = kmalloc(sizeof(struct kfree_obj), GFP_KERNEL);
> > + if (!alloc_ptrs[i])
> > + return -ENOMEM;
> > + }
> > +
> > + for (i = 0; i < kfree_alloc_num; i++) {
> > + if (!kfree_no_batch) {
> > + kfree_rcu(alloc_ptrs[i], rh);
> > + } else {
> > + rcu_callback_t cb;
> > +
> > + cb = (rcu_callback_t)(unsigned long)offsetof(struct kfree_obj, rh);
> > + kfree_call_rcu_nobatch(&(alloc_ptrs[i]->rh), cb);
> > + }
> > + }
> > +
> > + schedule_timeout_uninterruptible(2);
>
> Why the two-jiffy wait in the middle of a timed test? Yes, you need
> a cond_resched() and maybe more here, but a two-jiffy wait? I don't
> see how this has any chance of getting valid measurements.
>
> What am I missing here?
Replaced it with cond_resched() as we discussed.
> > + } while (!torture_must_stop() && ++l < kfree_loops);
> > +
> > + kfree(alloc_ptrs);
> > +
> > + if (atomic_inc_return(&n_kfree_perf_thread_ended) >= kfree_nrealthreads) {
> > + end_time = ktime_get_mono_fast_ns();
>
> Don't we want to capture the end time before the kfree()?
Fixed.
> > + pr_alert("Total time taken by all kfree'ers: %llu ns, loops: %d\n",
> > + (unsigned long long)(end_time - start_time), kfree_loops);
> > + if (shutdown) {
> > + smp_mb(); /* Assign before wake. */
> > + wake_up(&shutdown_wq);
> > + }
> > + }
> > +
> > + torture_kthread_stopping("kfree_perf_thread");
> > + return 0;
> > +}
> > +
> > +static void
> > +kfree_perf_cleanup(void)
> > +{
> > + int i;
> > +
> > + if (torture_cleanup_begin())
> > + return;
> > +
> > + if (kfree_reader_tasks) {
> > + for (i = 0; i < kfree_nrealthreads; i++)
> > + torture_stop_kthread(kfree_perf_thread,
> > + kfree_reader_tasks[i]);
> > + kfree(kfree_reader_tasks);
> > + }
> > +
> > + torture_cleanup_end();
> > +}
> > +
> > +/*
> > + * shutdown kthread. Just waits to be awakened, then shuts down system.
> > + */
> > +static int
> > +kfree_perf_shutdown(void *arg)
> > +{
> > + do {
> > + wait_event(shutdown_wq,
> > + atomic_read(&n_kfree_perf_thread_ended) >=
> > + kfree_nrealthreads);
> > + } while (atomic_read(&n_kfree_perf_thread_ended) < kfree_nrealthreads);
> > +
> > + smp_mb(); /* Wake before output. */
> > +
> > + kfree_perf_cleanup();
> > + kernel_power_off();
> > + return -EINVAL;
> > +}
>
> Is there some way to avoid (almost) duplicating rcu_perf_shutdown()?
At the moment, I don't see a good way to do this without passing in function
pointers or using macros, which I think would look uglier than the above
duplication. Sorry.
thanks,
- Joel