From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Date: Mon, 6 Aug 2018 15:27:05 -0700
From: Andrew Morton 
To: Christoph Hellwig 
Cc: viro@zeniv.linux.org.uk, Avi Kivity, Linus Torvalds,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/4] aio: allow direct aio poll comletions for keyed wakeups
Message-Id: <20180806152705.37809e16c02543cc24626607@linux-foundation.org>
In-Reply-To: <20180806083058.14724-5-hch@lst.de>
References: <20180806083058.14724-1-hch@lst.de>
	<20180806083058.14724-5-hch@lst.de>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 

On Mon, 6 Aug 2018 10:30:58 +0200 Christoph Hellwig wrote:

> If we get a keyed wakeup for a aio poll waitqueue and wake can acquire the
> ctx_lock without spinning we can just complete the iocb straight from the
> wakeup callback to avoid a context switch.

Why do we try to avoid spinning on the lock?

> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -1672,13 +1672,26 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
>  		void *key)
>  {
>  	struct poll_iocb *req = container_of(wait, struct poll_iocb, wait);
> +	struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
>  	__poll_t mask = key_to_poll(key);
>  
>  	req->woken = true;
>  
>  	/* for instances that support it check for an event match first: */
> -	if (mask && !(mask & req->events))
> -		return 0;
> +	if (mask) {
> +		if (!(mask & req->events))
> +			return 0;
> +
> +		/* try to complete the iocb inline if we can: */

ie, this comment explains "what" but not "why".

(There's a typo in Subject:, btw)

> +		if (spin_trylock(&iocb->ki_ctx->ctx_lock)) {
> +			list_del(&iocb->ki_list);
> +			spin_unlock(&iocb->ki_ctx->ctx_lock);
> +
> +			list_del_init(&req->wait.entry);
> +			aio_poll_complete(iocb, mask);
> +			return 1;
> +		}
> +	}
>  
>  	list_del_init(&req->wait.entry);
>  	schedule_work(&req->work);
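
(Editor's note, for readers without the kernel tree handy: below is a minimal
userspace sketch of the trylock-or-defer pattern the hunk above adds. It is
not the kernel code -- it uses pthreads instead of the kernel spinlock and
workqueue APIs, and every name in it (struct request, complete_request,
defer_to_worker, wakeup) is invented purely for illustration.)

    /*
     * Userspace analogy of the fast path discussed above: the "wakeup"
     * tries to take the lock without blocking and completes the request
     * inline; if the lock is contended it defers to a worker instead of
     * spinning. Build with: cc -pthread sketch.c
     */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct request {
            pthread_mutex_t *ctx_lock;      /* protects the context state */
            bool completed;
    };

    static void complete_request(struct request *req)
    {
            req->completed = true;
            printf("completed inline\n");
    }

    static void defer_to_worker(struct request *req)
    {
            /* stand-in for handing the request off to a worker thread */
            printf("deferred: lock was contended\n");
    }

    /* called from the "wakeup" path */
    static void wakeup(struct request *req)
    {
            if (pthread_mutex_trylock(req->ctx_lock) == 0) {
                    /* got the lock without blocking: finish right here */
                    complete_request(req);
                    pthread_mutex_unlock(req->ctx_lock);
            } else {
                    /* lock is held elsewhere: do not spin, punt to a worker */
                    defer_to_worker(req);
            }
    }

    int main(void)
    {
            pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
            struct request req = { .ctx_lock = &lock, .completed = false };

            wakeup(&req);                   /* uncontended: completes inline */

            pthread_mutex_lock(&lock);
            wakeup(&req);                   /* contended: falls back to deferral */
            pthread_mutex_unlock(&lock);

            return 0;
    }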