From: ebiederm@xmission.com (Eric W. Biederman)
To: Giuseppe Scrivano <gscrivan@redhat.com>
Cc: linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
	paulmck@kernel.org, viro@zeniv.linux.org.uk
Subject: Re: [PATCH v2] ipc: use a work queue to free_ipc
Date: Mon, 24 Feb 2020 10:10:25 -0600	[thread overview]
Message-ID: <87v9nw2cge.fsf@x220.int.ebiederm.org> (raw)
In-Reply-To: <871rqlt9fu.fsf@redhat.com> (Giuseppe Scrivano's message of "Sun, 23 Feb 2020 20:01:09 +0100")

Giuseppe Scrivano <gscrivan@redhat.com> writes:

> ebiederm@xmission.com (Eric W. Biederman) writes:
>
>> Giuseppe Scrivano <gscrivan@redhat.com> writes:
>>
>>> it avoids blocking on synchronize_rcu() in kern_unmount().
>>>
>>> the code:
>>>
>>> #define _GNU_SOURCE
>>> #include <sched.h>
>>> #include <error.h>
>>> #include <errno.h>
>>> #include <stdlib.h>
>>> int main()
>>> {
>>>   int i;
>>>   for (i = 0; i < 1000; i++)
>>>     if (unshare (CLONE_NEWIPC) < 0)
>>>       error (EXIT_FAILURE, errno, "unshare");
>>> }
>>>
>>> gets from:
>>>
>>> 	Command being timed: "./ipc-namespace"
>>> 	User time (seconds): 0.00
>>> 	System time (seconds): 0.06
>>> 	Percent of CPU this job got: 0%
>>> 	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:08.05
>>>
>>> to:
>>>
>>> 	Command being timed: "./ipc-namespace"
>>> 	User time (seconds): 0.00
>>> 	System time (seconds): 0.02
>>> 	Percent of CPU this job got: 96%
>>> 	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.03
>>
>> I have a question.  You create 1000 namespaces in a single process
>> and then free them.  So I expect that single process is busy waiting
>> in that kern_unmount 1000 times, and waiting for 1000 synchronize_rcu's.
>>
>> Does this ever show up in a real world work-load?
>>
>> Is the cost of a single synchronize_rcu a problem?
>
> yes exactly, creating 1000 namespaces is not a real world use case (at
> least in my experience) but I've used it only to show the impact of the
> patch.

I know running 1000 containers is a real use case, and I would not be
surprised if there are configurations that go higher.

> The cost of the single synchronize_rcu is the issue.
>
> Most containers run in their own IPC namespace, so this is a constant
> cost for each container.

Agreed.

>> The code you are working to avoid is this.
>>
>> void kern_unmount(struct vfsmount *mnt)
>> {
>> 	/* release long term mount so mount point can be released */
>> 	if (!IS_ERR_OR_NULL(mnt)) {
>> 		real_mount(mnt)->mnt_ns = NULL;
>> 		synchronize_rcu();	/* yecchhh... */
>> 		mntput(mnt);
>> 	}
>> }
>>
>> Which makes me wonder if perhaps there might be a simpler solution
>> involving just that code.  But I do realize such a solution
>> would require analyzing all of the code after kern_unmount
>> to see if any of it depends upon the synchronize_rcu.
>>
>>
>> In summary, I see no correctness problems with your code.
>> Code that runs faster is always nice.  In this case I just
>> see the cost being shifted somewhere else not eliminated.
>> I also see a slight increase in complexity.
>>
>> So I am wondering if this was an exercise to speed up a toy
>> benchmark or if this is an effort to speed up real world code.
>
> I've seen the issue while profiling real world work loads.

So the question is how to remove this delay.

>> At the very least some version of the motivation needs to be
>> recorded so that the next time someone comes in and reworks
>> the code they can look in the history and figure out what
>> they need to do to avoid introducing a regression.
>
> Is it enough in the git commit message or should it be an inline
> comment?

The git commit message should be enough to record the motivation.

A comment in the code about the work queue that says something
like "used to avoid the cost of synchronize_rcu in kern_unmount" would
also be nice.

Eric


Thread overview: 5+ messages
2020-02-17 18:36 [PATCH v2] ipc: use a work queue to free_ipc Giuseppe Scrivano
2020-02-17 18:56 ` Paul E. McKenney
2020-02-21 19:37 ` Eric W. Biederman
2020-02-23 19:01   ` Giuseppe Scrivano
2020-02-24 16:10     ` Eric W. Biederman [this message]
