Date: Wed, 26 Sep 2018 14:25:37 +0200
From: Miklos Szeredi
To: Kirill Tkhai
Cc: linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 6/6] fuse: Do not take fuse_conn::lock on fuse_request_send_background()
Message-ID: <20180926122537.GC23439@veci.piliscsaba.redhat.com>
References: <153538208536.18303.10732945923322972743.stgit@localhost.localdomain> <153538379617.18303.11871598131511120870.stgit@localhost.localdomain>
In-Reply-To: <153538379617.18303.11871598131511120870.stgit@localhost.localdomain>

On Mon, Aug 27, 2018 at 06:29:56PM +0300, Kirill Tkhai wrote:
> Currently, we take fc->lock there only to check for fc->connected.
> But this flag is changed only on connection abort, which is a very
> rare operation. It looks like a good trade-off to make
> fuse_request_send_background() faster, while making fuse_abort_conn()
> slower.
>
> So, we make fuse_request_send_background() lockless and mark the
> (fc->connected == 1) region as RCU-protected. The abort function
> just uses synchronize_sched() to wait until all pending background
> requests have been queued, and then does an ordinary abort.
>
> Note that synchronize_sched() is used instead of synchronize_rcu(),
> since we want to check fc->connected without rcu_dereference()
> in fuse_request_send_background() (i.e., not to add memory barriers
> to this hot path).

Apart from the inaccuracies in the above (the _sched variant is for
scheduling and NMI-taking code; the _sched variant requires
rcu_dereference() as well; rcu_dereference() does not add barriers;
rcu_dereference() is only for pointers, so we can't use it for an
integer), wouldn't it be simpler to just use bg_lock for checking
->connected, and take bg_lock (as well as fc->lock) when clearing
->connected?

Updated patch below (untested).

Thanks,
Miklos
---
Subject: fuse: do not take fc->lock in fuse_request_send_background()
From: Kirill Tkhai
Date: Mon, 27 Aug 2018 18:29:56 +0300

Currently, we take fc->lock there only to check for fc->connected.
But this flag is changed only on connection abort, which is a very
rare operation.
Signed-off-by: Kirill Tkhai
Signed-off-by: Miklos Szeredi
---
 fs/fuse/dev.c    | 46 +++++++++++++++++++++++-----------------------
 fs/fuse/file.c   |  4 +++-
 fs/fuse/fuse_i.h |  4 +---
 3 files changed, 27 insertions(+), 27 deletions(-)

--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -574,42 +574,38 @@ ssize_t fuse_simple_request(struct fuse_
 	return ret;
 }
 
-/*
- * Called under fc->lock
- *
- * fc->connected must have been checked previously
- */
-void fuse_request_send_background_nocheck(struct fuse_conn *fc,
-					  struct fuse_req *req)
+bool fuse_request_queue_background(struct fuse_conn *fc, struct fuse_req *req)
 {
-	BUG_ON(!test_bit(FR_BACKGROUND, &req->flags));
+	bool queued = false;
+
+	WARN_ON(!test_bit(FR_BACKGROUND, &req->flags));
 	if (!test_bit(FR_WAITING, &req->flags)) {
 		__set_bit(FR_WAITING, &req->flags);
 		atomic_inc(&fc->num_waiting);
 	}
 	__set_bit(FR_ISREPLY, &req->flags);
 	spin_lock(&fc->bg_lock);
-	fc->num_background++;
-	if (fc->num_background == fc->max_background)
-		fc->blocked = 1;
-	if (fc->num_background == fc->congestion_threshold && fc->sb) {
-		set_bdi_congested(fc->sb->s_bdi, BLK_RW_SYNC);
-		set_bdi_congested(fc->sb->s_bdi, BLK_RW_ASYNC);
+	if (likely(fc->connected)) {
+		fc->num_background++;
+		if (fc->num_background == fc->max_background)
+			fc->blocked = 1;
+		if (fc->num_background == fc->congestion_threshold && fc->sb) {
+			set_bdi_congested(fc->sb->s_bdi, BLK_RW_SYNC);
+			set_bdi_congested(fc->sb->s_bdi, BLK_RW_ASYNC);
+		}
+		list_add_tail(&req->list, &fc->bg_queue);
+		flush_bg_queue(fc);
+		queued = true;
 	}
-	list_add_tail(&req->list, &fc->bg_queue);
-	flush_bg_queue(fc);
 	spin_unlock(&fc->bg_lock);
+
+	return queued;
 }
 
 void fuse_request_send_background(struct fuse_conn *fc, struct fuse_req *req)
 {
-	BUG_ON(!req->end);
-	spin_lock(&fc->lock);
-	if (fc->connected) {
-		fuse_request_send_background_nocheck(fc, req);
-		spin_unlock(&fc->lock);
-	} else {
-		spin_unlock(&fc->lock);
+	WARN_ON(!req->end);
+	if (!fuse_request_queue_background(fc, req)) {
 		req->out.h.error = -ENOTCONN;
 		req->end(fc, req);
 		fuse_put_request(fc, req);
@@ -2112,7 +2108,11 @@ void fuse_abort_conn(struct fuse_conn *f
 		struct fuse_req *req, *next;
 		LIST_HEAD(to_end);
 
+		/* Background queuing checks fc->connected under bg_lock */
+		spin_lock(&fc->bg_lock);
 		fc->connected = 0;
+		spin_unlock(&fc->bg_lock);
+
 		fc->aborted = is_abort;
 		fuse_set_initialized(fc);
 		list_for_each_entry(fud, &fc->devices, entry) {
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -863,9 +863,7 @@ ssize_t fuse_simple_request(struct fuse_
  * Send a request in the background
  */
 void fuse_request_send_background(struct fuse_conn *fc, struct fuse_req *req);
-
-void fuse_request_send_background_nocheck(struct fuse_conn *fc,
-					  struct fuse_req *req);
+bool fuse_request_queue_background(struct fuse_conn *fc, struct fuse_req *req);
 
 /* Abort all requests */
 void fuse_abort_conn(struct fuse_conn *fc, bool is_abort);
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -1487,6 +1487,7 @@ __acquires(fc->lock)
 	struct fuse_inode *fi = get_fuse_inode(req->inode);
 	struct fuse_write_in *inarg = &req->misc.write.in;
 	__u64 data_size = req->num_pages * PAGE_SIZE;
+	bool queued;
 
 	if (!fc->connected)
 		goto out_free;
@@ -1502,7 +1503,8 @@ __acquires(fc->lock)
 
 	req->in.args[1].size = inarg->size;
 	fi->writectr++;
-	fuse_request_send_background_nocheck(fc, req);
+	queued = fuse_request_queue_background(fc, req);
+	WARN_ON(!queued);
 	return;
 
  out_free:
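
[Editor's note, not part of the patch above: a minimal user-space sketch of
the same locking pattern, with hypothetical names (struct conn,
queue_background, abort_conn) and a pthread mutex standing in for bg_lock.
The point it illustrates is only this: because the abort path clears the
connected flag under the same lock the queuing path takes, any queue attempt
that starts after the abort sees connected == false and fails, which is the
case the caller turns into -ENOTCONN in the real code.]

/* Sketch only: hypothetical user-space model of the bg_lock pattern. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct conn {
	pthread_mutex_t bg_lock;	/* protects connected and bg_queue */
	bool connected;
	int bg_queue;			/* stand-in for the queued request list */
};

/* Mirrors fuse_request_queue_background(): queue only while connected. */
static bool queue_background(struct conn *c)
{
	bool queued = false;

	pthread_mutex_lock(&c->bg_lock);
	if (c->connected) {
		c->bg_queue++;
		queued = true;
	}
	pthread_mutex_unlock(&c->bg_lock);

	return queued;
}

/*
 * Mirrors the fuse_abort_conn() change: clear the flag under bg_lock, so
 * everything that was queued before the abort is visible here, and nothing
 * can be queued afterwards.
 */
static int abort_conn(struct conn *c)
{
	int to_end;

	pthread_mutex_lock(&c->bg_lock);
	c->connected = false;
	to_end = c->bg_queue;		/* drain whatever was queued so far */
	c->bg_queue = 0;
	pthread_mutex_unlock(&c->bg_lock);

	return to_end;
}

int main(void)
{
	struct conn c = { PTHREAD_MUTEX_INITIALIZER, true, 0 };

	printf("queued: %d\n", queue_background(&c));	/* 1 */
	printf("aborted: %d\n", abort_conn(&c));	/* 1 request ended */
	printf("queued: %d\n", queue_background(&c));	/* 0: caller fails the request */
	return 0;
}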