linux-kernel.vger.kernel.org archive mirror
From: "J. Bruce Fields" <bfields@fieldses.org>
To: Stanislav Kinsbursky <skinsbursky@parallels.com>
Cc: Trond.Myklebust@netapp.com, linux-nfs@vger.kernel.org,
	linux-kernel@vger.kernel.org, devel@openvz.org
Subject: Re: [PATCH v3 0/2] SUNRPC: separate per-net data creation from service
Date: Wed, 2 May 2012 17:58:37 -0400	[thread overview]
Message-ID: <20120502215837.GA12089@fieldses.org> (raw)
In-Reply-To: <20120502120536.8794.22210.stgit@localhost.localdomain>

Thanks, applying.

--b.

On Wed, May 02, 2012 at 04:08:29PM +0400, Stanislav Kinsbursky wrote:
> 
> v3: "SUNRPC: new svc_bind() routine introduced" patch was squashed with the
>     "SUNRPC: check rpcbind clients usage counter before decrement" patch.
> 
> v2: Increase per-net usage counter in lockd_up_net().
> 
> This is a cleanup patch set.
> It will be followed by a LockD start/stop cleanup patch set and an NFS
> callback service containerization patch set (yes, I forgot to implement it).
> 
> Today, per-net data is created together with the service, while the service
> may then be started in a different network namespace; the data is likewise
> destroyed together with the service. Moreover, the network context for
> destroying per-net data is taken from the current process. This works, but
> the code looks ugly.
> This patch set separates per-net data allocation and destruction from
> service allocation and destruction.
> IOW, per-net data has to be destroyed by service users - not by the service
> itself.
> 
> BTW, the NFSd code becomes uglier with this patch set. Sorry.
> But I assume that these new ugly parts will be replaced later by NFSd
> service containerization code.
> 
> The following series implements...
> 
> ---
> 
> Stanislav Kinsbursky (2):
>       SUNRPC: new svc_bind() routine introduced
>       SUNRPC: move per-net operations from svc_destroy()
> 
> 
>  fs/lockd/svc.c             |   33 +++++++++++++++++++++------------
>  fs/nfs/callback.c          |   11 +++++++++++
>  fs/nfsd/nfsctl.c           |    4 ++++
>  fs/nfsd/nfssvc.c           |   16 ++++++++++++++++
>  include/linux/sunrpc/svc.h |    1 +
>  net/sunrpc/rpcb_clnt.c     |   12 +++++++-----
>  net/sunrpc/svc.c           |   23 ++++++++++-------------
>  7 files changed, 70 insertions(+), 30 deletions(-)
> 


Thread overview: 7+ messages
2012-05-02 12:08 [PATCH v3 0/2] SUNRPC: separate per-net data creation from service Stanislav Kinsbursky
2012-05-02 12:08 ` [PATCH v3 1/2] SUNRPC: new svc_bind() routine introduced Stanislav Kinsbursky
2012-05-02 12:08 ` [PATCH v3 2/2] SUNRPC: move per-net operations from svc_destroy() Stanislav Kinsbursky
2012-05-04  8:49   ` [PATCH v4] " Stanislav Kinsbursky
2012-05-02 21:58 ` J. Bruce Fields [this message]
2012-05-03 14:27   ` [PATCH v3 0/2] SUNRPC: separate per-net data creation from service J. Bruce Fields
2012-05-04  8:43     ` Stanislav Kinsbursky
