From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from nautica.notk.org ([91.121.71.147]:47528 "EHLO nautica.notk.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S2389627AbeHARJS (ORCPT ); Wed, 1 Aug 2018 13:09:18 -0400
Date: Wed, 1 Aug 2018 17:22:48 +0200
From: Dominique Martinet 
To: Greg Kurz 
Cc: v9fs-developer@lists.sourceforge.net, linux-fsdevel@vger.kernel.org,
	Matthew Wilcox , linux-kernel@vger.kernel.org
Subject: Re: [V9fs-developer] [PATCH 2/2] net/9p: add a per-client fcall kmem_cache
Message-ID: <20180801152248.GB21463@nautica>
References: <20180730093101.GA7894@nautica>
	<1532943263-24378-1-git-send-email-asmadeus@codewreck.org>
	<1532943263-24378-2-git-send-email-asmadeus@codewreck.org>
	<20180801162824.31fb6a30@bahia.lan>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20180801162824.31fb6a30@bahia.lan>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID: 

Greg Kurz wrote on Wed, Aug 01, 2018:
> > diff --git a/net/9p/client.c b/net/9p/client.c
> > index ba99a94a12c9..215e3b1ed7b4 100644
> > --- a/net/9p/client.c
> > +++ b/net/9p/client.c
> > @@ -231,15 +231,34 @@ static int parse_opts(char *opts, struct p9_client *clnt)
> >  	return ret;
> >  }
> >  
> > -static int p9_fcall_alloc(struct p9_fcall *fc, int alloc_msize)
> > +static int p9_fcall_alloc(struct p9_client *c, struct p9_fcall *fc,
> > +			  int alloc_msize)
> >  {
> > -	fc->sdata = kmalloc(alloc_msize, GFP_NOFS);
> > +	if (c->fcall_cache && alloc_msize == c->msize)
> 
> This is a presumably hot path for any request but the initial TVERSION,
> you probably want likely() here...

c->fcall_cache is indeed very likely, but alloc_msize == c->msize not so
much, as zc requests will be quite common for virtio and will be 4k in
size.

Although with that cache I'm starting to wonder if we should always use
it... Speed-wise, if there is no memory pressure the cache is likely
going to be faster. If there is pressure and the items are reclaimed,
though, that will bring some heavier slow-down as it will need to find
bigger memory regions. I'm not sure which path we should favor, to be
honest; I'll keep these separate for now.

For the first part of the check, Matthew suggested trying to trick msize
into a different value to make this check fail for the initial TVERSION
call, but even after thinking about it a bit I don't really see how to
do that cleanly. I can at least make -that- likely()...

> > +		fc->sdata = kmem_cache_alloc(c->fcall_cache, GFP_NOFS);
> > +	else
> > +		fc->sdata = kmalloc(alloc_msize, GFP_NOFS);
> >  	if (!fc->sdata)
> >  		return -ENOMEM;
> >  	fc->capacity = alloc_msize;
> >  	return 0;
> >  }
> >  
> > +void p9_fcall_free(struct p9_client *c, struct p9_fcall *fc)
> > +{
> > +	/* sdata can be NULL for interrupted requests in trans_rdma,
> > +	 * and kmem_cache_free does not do NULL-check for us
> > +	 */
> > +	if (unlikely(!fc->sdata))
> > +		return;
> > +
> > +	if (c->fcall_cache && fc->capacity == c->msize)
> 
> ... and here as well.

For this one I'll unfortunately need to store in the fc how it has been
allocated, as slob doesn't allow kmem_cache_free() on a buffer that was
allocated with kmalloc(), and in anticipation of reqs being refcounted
in a hostile world the initial TVERSION req could be freed after
fcall_cache is created :/

That's a bit of a burden, but at least it will reduce the checks to one
here.

> > +		kmem_cache_free(c->fcall_cache, fc->sdata);
> > +	else
> > +		kfree(fc->sdata);
> > +}
> > +EXPORT_SYMBOL(p9_fcall_free);
> > +
> >  static struct kmem_cache *p9_req_cache;
> >  
> >  /**

Anyway, I've had as many comments as I could hope for; thanks everyone
for the quick review. I'll send a v2 of both patches tomorrow.

-- 
Dominique