From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <linux-fsdevel-owner@vger.kernel.org>
Received: from bombadil.infradead.org ([198.137.202.133]:40654 "EHLO
        bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S934256AbeEWTU3 (ORCPT);
        Wed, 23 May 2018 15:20:29 -0400
From: Christoph Hellwig <hch@lst.de>
To: viro@zeniv.linux.org.uk
Cc: Avi Kivity, linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
        netdev@vger.kernel.org, linux-api@vger.kernel.org,
        linux-kernel@vger.kernel.org, stable@kernel.org
Subject: [PATCH 01/33] fix io_destroy()/aio_complete() race
Date: Wed, 23 May 2018 21:19:50 +0200
Message-Id: <20180523192022.1703-2-hch@lst.de>
In-Reply-To: <20180523192022.1703-1-hch@lst.de>
References: <20180523192022.1703-1-hch@lst.de>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID: <linux-fsdevel.vger.kernel.org>

From: Al Viro <viro@zeniv.linux.org.uk>

If io_destroy() gets to cancelling everything that can be cancelled and
gets to kiocb_cancel() calling the function the driver has left in
->ki_cancel, it becomes vulnerable to a race with IO completion.  At
that point req is already taken off the list, so aio_complete() does
*NOT* spin until we (in free_ioctx_users()) release ->ctx_lock.  As a
result, it proceeds to kiocb_free(), freeing req just as it gets passed
to ->ki_cancel().

The fix is simple - remove the request from the list after the call of
kiocb_cancel().  All instances of ->ki_cancel() already have to cope
with being called with the iocb still on the list - that's what happens
in io_cancel(2).

Cc: stable@kernel.org
Fixes: 0460fef2a921 ("aio: use cancellation list lazily")
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/aio.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index 755d3f57bcc8..1c383bb44b2d 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -639,9 +639,8 @@ static void free_ioctx_users(struct percpu_ref *ref)
 	while (!list_empty(&ctx->active_reqs)) {
 		req = list_first_entry(&ctx->active_reqs,
 				       struct aio_kiocb, ki_list);
-
-		list_del_init(&req->ki_list);
 		kiocb_cancel(req);
+		list_del_init(&req->ki_list);
 	}
 
 	spin_unlock_irq(&ctx->ctx_lock);
-- 
2.17.0
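
Why the new ordering closes the race: aio_complete() always frees the
request, but while the iocb is still on ->ki_list it first takes
->ctx_lock, and so it waits for free_ioctx_users() to finish with the
request. Once list_del_init() is moved after kiocb_cancel(), the
completion side can no longer free req out from under the cancel
callback. The stand-alone userspace sketch below models that ordering
with a pthread mutex. It is illustrative only: the names (ctx, req,
on_list, cancel_all, complete) merely mirror fs/aio.c, and the one-slot
"active" pointer stands in for the ctx->active_reqs list.

/*
 * Userspace model of the ordering fixed by this patch.  Not kernel
 * code; compile with: cc -pthread.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx {
	pthread_mutex_t lock;		/* ~ctx->ctx_lock */
	struct req *active;		/* one-slot stand-in for active_reqs */
};

struct req {
	struct ctx *ctx;
	atomic_bool on_list;		/* ~!list_empty_careful(&ki_list) */
};

/*
 * Cancel side, ordered as free_ioctx_users() is after the fix: only
 * touch requests still linked, use the request first, unlink it last.
 */
static void cancel_all(struct ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	struct req *req = ctx->active;
	if (req) {
		printf("cancel: still using req %p\n", (void *)req); /* ~kiocb_cancel() */
		atomic_store(&req->on_list, false);	/* ~list_del_init() */
		ctx->active = NULL;
	}
	pthread_mutex_unlock(&ctx->lock);
}

/*
 * Completion side, shaped like aio_complete(): it always frees the
 * request, but while the request is still listed it must take the
 * lock first, and therefore waits until cancel_all() is done.
 */
static void *complete(void *arg)
{
	struct req *req = arg;

	if (atomic_load(&req->on_list)) {
		pthread_mutex_lock(&req->ctx->lock);
		req->ctx->active = NULL;
		atomic_store(&req->on_list, false);
		pthread_mutex_unlock(&req->ctx->lock);
	}
	free(req);			/* ~kiocb_free(): safe only here */
	return NULL;
}

int main(void)
{
	static struct ctx ctx = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct req *req = malloc(sizeof(*req));
	pthread_t t;

	req->ctx = &ctx;
	atomic_init(&req->on_list, true);
	ctx.active = req;

	pthread_create(&t, NULL, complete, req);	/* IO completion */
	cancel_all(&ctx);				/* io_destroy() path */
	pthread_join(t, NULL);
	return 0;
}

Reordering the body of cancel_all() to the pre-patch sequence (unlink
first, then use req) reopens the window in the model as well:
complete() may then observe on_list == false, skip the lock, and free
req while cancel_all() is still using it.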