On 01/02/2020 03:21, Jens Axboe wrote:
> We punt close to async for the final fput(), but we log the completion
> even before that, even in that case. We rely on the request not having
> a files table assigned to detect what the final async close should do.
> However, if we punt the async queue to __io_queue_sqe(), we'll get
> ->files assigned, and this makes io_close_finish() think it should both
> close the filp again (which does no harm) AND log a new CQE event for
> this request. This causes duplicate CQEs.
>
> Queue the request up for async manually so we don't grab files
> needlessly and trigger this condition.

Judging from your last two patches, it's becoming hard to track
everything in the current state. As mentioned, I'm going to rework and
fix the submission and prep paths with a bit of formalisation.

> Signed-off-by: Jens Axboe
>
> ---
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index cb3c0a803b46..fb5c5b3e23f4 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -2841,16 +2841,13 @@ static void io_close_finish(struct io_wq_work **workptr)
>  		int ret;
>
>  		ret = filp_close(req->close.put_file, req->work.files);
> -		if (ret < 0) {
> +		if (ret < 0)
>  			req_set_fail_links(req);
> -		}
>  		io_cqring_add_event(req, ret);
>  	}
>
>  	fput(req->close.put_file);
>
> -	/* we bypassed the re-issue, drop the submission reference */
> -	io_put_req(req);
>  	io_put_req_find_next(req, &nxt);
>  	if (nxt)
>  		io_wq_assign_next(workptr, nxt);
> @@ -2892,7 +2889,13 @@ static int io_close(struct io_kiocb *req, struct io_kiocb **nxt,
>
>  eagain:
>  	req->work.func = io_close_finish;
> -	return -EAGAIN;
> +	/*
> +	 * Do manual async queue here to avoid grabbing files - we don't
> +	 * need the files, and it'll cause io_close_finish() to close
> +	 * the file again and cause a double CQE entry for this request
> +	 */
> +	io_queue_async_work(req);
> +	return 0;
>  }
>
>  static int io_prep_sfr(struct io_kiocb *req, const struct io_uring_sqe *sqe)

-- 
Pavel Begunkov
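
To make the failure mode above concrete, here is a minimal userspace
sketch of the control flow the commit message describes. It is not
kernel code, and every name in it (struct request, log_cqe,
close_finish) is hypothetical, chosen only to model the two paths:

	/*
	 * Userspace model of the double-CQE hazard: if the async
	 * finish path keys off ->files to decide whether to post a
	 * completion, and generic async queueing assigns ->files,
	 * the finish handler posts a second CQE for a request whose
	 * completion was already logged at submission time.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct request {
		bool has_files;   /* models req->work.files being set */
		int cqes_posted;  /* completion events for this request */
	};

	static void log_cqe(struct request *req)
	{
		req->cqes_posted++;
	}

	/* models io_close_finish(): logs a CQE only when ->files is set */
	static void close_finish(struct request *req)
	{
		if (req->has_files)
			log_cqe(req);
		/* the final fput() would happen here */
	}

	int main(void)
	{
		/*
		 * Buggy path: punting through the generic queue assigns
		 * ->files, so the finish handler logs a CQE on top of
		 * the one the submission path already posted.
		 */
		struct request buggy = { .has_files = true };
		log_cqe(&buggy);     /* logged at submission time */
		close_finish(&buggy); /* logged again -> duplicate */

		/*
		 * Fixed path: queueing the work manually never grabs
		 * files, so the finish handler stays quiet.
		 */
		struct request fixed = { .has_files = false };
		log_cqe(&fixed);
		close_finish(&fixed);

		printf("buggy: %d CQEs, fixed: %d CQEs\n",
		       buggy.cqes_posted, fixed.cqes_posted);
		return 0;
	}

Run as-is, the buggy path posts two CQEs for one request while the
fixed path posts one, which is exactly the duplicate the patch removes
by calling io_queue_async_work() directly instead of returning -EAGAIN.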