From mboxrd@z Thu Jan  1 00:00:00 1970
From: Christoph Hellwig <hch@lst.de>
To: viro@zeniv.linux.org.uk
Cc: Avi Kivity, linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 4/4] aio: allow direct aio poll completions for keyed wakeups
Date: Mon, 30 Jul 2018 09:15:44 +0200
Message-Id: <20180730071544.23998-5-hch@lst.de>
In-Reply-To: <20180730071544.23998-1-hch@lst.de>
References: <20180730071544.23998-1-hch@lst.de>
Sender: linux-kernel-owner@vger.kernel.org

If we get a keyed wakeup for an aio poll waitqueue and can acquire the
ctx_lock without spinning, we can complete the iocb straight from the
wakeup callback and avoid a context switch.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/aio.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index 6993684d0665..158c5e41b17c 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -1674,11 +1674,25 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
 		void *key)
 {
 	struct poll_iocb *req = container_of(wait, struct poll_iocb, wait);
+	struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
 	__poll_t mask = key_to_poll(key);
 
 	/* for instances that support it check for an event match first: */
-	if (mask && !(mask & req->events))
-		return 0;
+	if (mask) {
+		if (!(mask & req->events))
+			return 0;
+
+		/* try to complete the iocb inline if we can: */
+		if (spin_trylock(&iocb->ki_ctx->ctx_lock)) {
+			req->done = true;
+			list_del(&iocb->ki_list);
+			spin_unlock(&iocb->ki_ctx->ctx_lock);
+
+			list_del_init(&req->wait.entry);
+			aio_poll_complete(iocb, mask);
+			return 1;
+		}
+	}
 
 	list_del_init(&req->wait.entry);
 	schedule_work(&req->work);
-- 
2.18.0
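
[Editorial note, not part of the patch: the sketch below is a minimal userspace
analogue of the trylock-or-defer pattern the commit message describes. It assumes
POSIX threads, and all names (request, wake_callback, defer_to_worker) are
illustrative rather than taken from fs/aio.c.]

	/*
	 * Userspace sketch: a wakeup-style callback that completes the
	 * request inline when the lock is uncontended, and hands off to
	 * a deferred "worker" path otherwise.  Build with: cc -pthread
	 */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct request {
		pthread_mutex_t *ctx_lock;	/* stands in for ctx->ctx_lock */
		bool done;
	};

	/* Deferred path: stands in for scheduling work to another context. */
	static void defer_to_worker(struct request *req)
	{
		printf("deferred: completing from worker context\n");
		req->done = true;
	}

	/*
	 * Wakeup-style callback: if the lock can be taken without blocking,
	 * complete the request right here and skip the hand-off (and the
	 * context switch it implies); otherwise fall back to the worker.
	 */
	static int wake_callback(struct request *req)
	{
		if (pthread_mutex_trylock(req->ctx_lock) == 0) {
			req->done = true;
			pthread_mutex_unlock(req->ctx_lock);
			printf("inline: completed from the wakeup path\n");
			return 1;
		}

		defer_to_worker(req);
		return 0;
	}

	int main(void)
	{
		pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
		struct request req = { .ctx_lock = &lock, .done = false };

		wake_callback(&req);		/* lock is free: inline completion */

		pthread_mutex_lock(&lock);	/* simulate contention */
		wake_callback(&req);		/* trylock fails: deferred path */
		pthread_mutex_unlock(&lock);

		return 0;
	}

[The patch uses spin_trylock rather than a blocking lock because the wakeup
callback must not spin or sleep waiting for ctx_lock; when the lock is busy it
keeps the existing schedule_work() fallback, which the sketch mirrors with its
deferred path.]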