Date: Tue, 31 Jul 2018 03:35:56 +0200
From: Dominique Martinet
To: piaojun
Cc: v9fs-developer@lists.sourceforge.net, linux-fsdevel@vger.kernel.org,
 Greg Kurz, Matthew Wilcox, linux-kernel@vger.kernel.org
Subject: Re: [V9fs-developer] [PATCH 2/2] net/9p: add a per-client fcall kmem_cache
Message-ID: <20180731013556.GA1530@nautica>
References: <20180730093101.GA7894@nautica>
 <1532943263-24378-1-git-send-email-asmadeus@codewreck.org>
 <1532943263-24378-2-git-send-email-asmadeus@codewreck.org>
 <5B5FB8F0.6020908@huawei.com>
In-Reply-To: <5B5FB8F0.6020908@huawei.com>

piaojun wrote on Tue, Jul 31, 2018:
> Could you paste some test results from before and after the patch is
> applied?

The only performance tests I ran were sent to the list a couple of
mails earlier; you can find them here:
http://lkml.kernel.org/r/20180730093101.GA7894@nautica

In particular, here are the results of the small-write benchmark just
before and after this patch, without KASAN (these are the same numbers
as in the link; hardware and setup are described there):
 - no alloc (4.18-rc7 request cache):      65.4k req/s
 - non-power-of-two alloc, without patch:  61.6k req/s
 - power-of-two alloc, without patch:      62.2k req/s
 - non-power-of-two alloc, with patch:     64.7k req/s
 - power-of-two alloc, with patch:         65.1k req/s

I'm rather happy with the result; I didn't expect that using a
dedicated cache would bring this much back, but it's certainly worth
it.

> > @@ -1011,6 +1034,7 @@ void p9_client_destroy(struct p9_client *clnt)
> >
> >  	p9_tag_cleanup(clnt);
> >
> > +	kmem_cache_destroy(clnt->fcall_cache);
>
> We could set fcall_cache to NULL to guard against use-after-free.
>
> >  	kfree(clnt);

Hmm, I understand where this comes from, but I'm not sure I agree. If
someone tries to access the client while/after it is freed, things are
going to break anyway; I'd rather let things break as obviously as
possible than try to cover it up.

-- 
Dominique
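
The pattern being benchmarked above boils down to one slab cache per
client, sized to the negotiated msize, so that non-power-of-two message
sizes are not rounded up to the next power of two the way kmalloc does.
A minimal sketch follows; the struct and function names here are
illustrative placeholders, not the code from the patch:

/*
 * Sketch of a per-client fcall buffer cache. Names
 * (p9_client_cache_init, p9_fcall_buf_alloc, ...) are hypothetical.
 */
#include <linux/slab.h>

struct p9_client_sketch {
        unsigned int msize;             /* negotiated max message size */
        struct kmem_cache *fcall_cache; /* slab of msize-sized buffers */
};

static int p9_client_cache_init(struct p9_client_sketch *clnt)
{
        /*
         * Regular fcall buffers for this client are exactly msize
         * bytes, so a dedicated cache avoids kmalloc rounding every
         * allocation up to a power of two when msize isn't one.
         */
        clnt->fcall_cache = kmem_cache_create("9p-fcall-cache",
                                              clnt->msize, 0, 0, NULL);
        return clnt->fcall_cache ? 0 : -ENOMEM;
}

static void *p9_fcall_buf_alloc(struct p9_client_sketch *clnt)
{
        return kmem_cache_alloc(clnt->fcall_cache, GFP_NOFS);
}

static void p9_fcall_buf_free(struct p9_client_sketch *clnt, void *buf)
{
        kmem_cache_free(clnt->fcall_cache, buf);
}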
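
For the destroy path, the variant piaojun suggests differs from the
quoted hunk only by the extra store. Continuing the sketch above (again
with a placeholder name and the rest of the function body elided):

static void p9_client_destroy_sketch(struct p9_client_sketch *clnt)
{
        /* ... earlier teardown elided ... */

        kmem_cache_destroy(clnt->fcall_cache);
        /*
         * The suggested poisoning; the reply above argues it only
         * papers over a bug, since clnt itself is freed on the next
         * line and any later access is already a use-after-free that
         * should crash loudly rather than quietly.
         */
        clnt->fcall_cache = NULL;

        kfree(clnt);
}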