From: Jens Axboe <axboe@kernel.dk>
To: io-uring@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 04/15] io_uring: re-issue block requests that failed because of resources
Date: Thu, 18 Jun 2020 08:43:44 -0600
Message-Id: <20200618144355.17324-5-axboe@kernel.dk>
In-Reply-To: <20200618144355.17324-1-axboe@kernel.dk>
References: <20200618144355.17324-1-axboe@kernel.dk>

Mark the plug with nowait == true, which will cause requests to avoid
blocking on request allocation. If they do block, we catch them and
reissue them from a task_work based handler.

Normally we can catch -EAGAIN directly, but the hard case is split
requests. As an example, the application issues a 512KB request. The
block core will split this into 128KB chunks if that's the max size for
the device. The first split issues just fine, but we run into -EAGAIN
for a later split of the same request. As the bio is split, we don't get
to see the -EAGAIN until one of the actual reads completes, and hence we
cannot handle it inline as part of submission.

This does potentially cause re-reads of parts of the range, as the whole
request is reissued. There's currently no better way to handle this.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
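[Not part of the patch; an illustrative sketch for reviewers. The small
liburing program below issues the kind of large O_DIRECT read the commit
message describes. The file argument, the 512KB size, and the 4KB buffer
alignment are assumptions, and whether the block core actually splits the
request depends on the device's limits; with this patch applied, a split
that hits -EAGAIN should be reissued in the kernel instead of surfacing to
userspace as a failed read.]

#define _GNU_SOURCE		/* O_DIRECT */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define READ_SIZE	(512 * 1024)	/* the 512KB example from the commit message */

int main(int argc, char **argv)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	void *buf;
	int fd, ret;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file-or-blockdev>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* O_DIRECT wants an aligned buffer */
	if (posix_memalign(&buf, 4096, READ_SIZE))
		return 1;

	ret = io_uring_queue_init(8, &ring, 0);
	if (ret) {
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}

	/* one large read; the block layer may split it internally */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, READ_SIZE, 0);
	io_uring_submit(&ring);

	ret = io_uring_wait_cqe(&ring, &cqe);
	if (ret) {
		fprintf(stderr, "wait_cqe: %d\n", ret);
		return 1;
	}
	printf("read returned %d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);

	io_uring_queue_exit(&ring);
	free(buf);
	close(fd);
	return 0;
}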
 fs/io_uring.c | 148 ++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 124 insertions(+), 24 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 2e257c5a1866..40413fb9d07b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -900,6 +900,13 @@ static int io_file_get(struct io_submit_state *state, struct io_kiocb *req,
 static void __io_queue_sqe(struct io_kiocb *req,
			    const struct io_uring_sqe *sqe);
 
+static ssize_t io_import_iovec(int rw, struct io_kiocb *req,
+			       struct iovec **iovec, struct iov_iter *iter,
+			       bool needs_lock);
+static int io_setup_async_rw(struct io_kiocb *req, ssize_t io_size,
+			     struct iovec *iovec, struct iovec *fast_iov,
+			     struct iov_iter *iter);
+
 static struct kmem_cache *req_cachep;
 
 static const struct file_operations io_uring_fops;
@@ -1978,12 +1985,115 @@ static void io_complete_rw_common(struct kiocb *kiocb, long res)
 	__io_cqring_add_event(req, res, cflags);
 }
 
+static void io_sq_thread_drop_mm(struct io_ring_ctx *ctx)
+{
+	struct mm_struct *mm = current->mm;
+
+	if (mm) {
+		kthread_unuse_mm(mm);
+		mmput(mm);
+	}
+}
+
+static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx,
+				   struct io_kiocb *req)
+{
+	if (io_op_defs[req->opcode].needs_mm && !current->mm) {
+		if (unlikely(!mmget_not_zero(ctx->sqo_mm)))
+			return -EFAULT;
+		kthread_use_mm(ctx->sqo_mm);
+	}
+
+	return 0;
+}
+
+#ifdef CONFIG_BLOCK
+static bool io_resubmit_prep(struct io_kiocb *req, int error)
+{
+	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+	ssize_t ret = -ECANCELED;
+	struct iov_iter iter;
+	int rw;
+
+	if (error) {
+		ret = error;
+		goto end_req;
+	}
+
+	switch (req->opcode) {
+	case IORING_OP_READV:
+	case IORING_OP_READ_FIXED:
+	case IORING_OP_READ:
+		rw = READ;
+		break;
+	case IORING_OP_WRITEV:
+	case IORING_OP_WRITE_FIXED:
+	case IORING_OP_WRITE:
+		rw = WRITE;
+		break;
+	default:
+		printk_once(KERN_WARNING "io_uring: bad opcode in resubmit %d\n",
+				req->opcode);
+		goto end_req;
+	}
+
+	ret = io_import_iovec(rw, req, &iovec, &iter, false);
+	if (ret < 0)
+		goto end_req;
+	ret = io_setup_async_rw(req, ret, iovec, inline_vecs, &iter);
+	if (!ret)
+		return true;
+	kfree(iovec);
+end_req:
+	io_cqring_add_event(req, ret);
+	req_set_fail_links(req);
+	io_put_req(req);
+	return false;
+}
+
+static void io_rw_resubmit(struct callback_head *cb)
+{
+	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+	struct io_ring_ctx *ctx = req->ctx;
+	int err;
+
+	__set_current_state(TASK_RUNNING);
+
+	err = io_sq_thread_acquire_mm(ctx, req);
+
+	if (io_resubmit_prep(req, err)) {
+		refcount_inc(&req->refs);
+		io_queue_async_work(req);
+	}
+}
+#endif
+
+static bool io_rw_reissue(struct io_kiocb *req, long res)
+{
+#ifdef CONFIG_BLOCK
+	struct task_struct *tsk;
+	int ret;
+
+	if ((res != -EAGAIN && res != -EOPNOTSUPP) || io_wq_current_is_worker())
+		return false;
+
+	tsk = req->task;
+	init_task_work(&req->task_work, io_rw_resubmit);
+	ret = task_work_add(tsk, &req->task_work, true);
+	if (!ret)
+		return true;
+#endif
+	return false;
+}
+
 static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
 {
 	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
 
-	io_complete_rw_common(kiocb, res);
-	io_put_req(req);
+	if (!io_rw_reissue(req, res)) {
+		io_complete_rw_common(kiocb, res);
+		io_put_req(req);
+	}
 }
 
 static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
@@ -2169,6 +2279,9 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	if (kiocb->ki_flags & IOCB_NOWAIT)
 		req->flags |= REQ_F_NOWAIT;
 
+	if (kiocb->ki_flags & IOCB_DIRECT)
+		io_get_req_task(req);
+
 	if (force_nonblock)
 		kiocb->ki_flags |= IOCB_NOWAIT;
 
@@ -2668,6 +2781,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
 	iov_count = iov_iter_count(&iter);
 	ret = rw_verify_area(READ, req->file, &kiocb->ki_pos, iov_count);
 	if (!ret) {
+		unsigned long nr_segs = iter.nr_segs;
 		ssize_t ret2 = 0;
 
 		if (req->file->f_op->read_iter)
@@ -2679,6 +2793,8 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
 		if (!force_nonblock || (ret2 != -EAGAIN && ret2 != -EIO)) {
 			kiocb_done(kiocb, ret2);
 		} else {
+			iter.count = iov_count;
+			iter.nr_segs = nr_segs;
 copy_iov:
 			ret = io_setup_async_rw(req, io_size, iovec,
						inline_vecs, &iter);
@@ -2765,6 +2881,7 @@ static int io_write(struct io_kiocb *req, bool force_nonblock)
 	iov_count = iov_iter_count(&iter);
 	ret = rw_verify_area(WRITE, req->file, &kiocb->ki_pos, iov_count);
 	if (!ret) {
+		unsigned long nr_segs = iter.nr_segs;
 		ssize_t ret2;
 
 		/*
@@ -2802,6 +2919,8 @@ static int io_write(struct io_kiocb *req, bool force_nonblock)
 		if (!force_nonblock || ret2 != -EAGAIN) {
 			kiocb_done(kiocb, ret2);
 		} else {
+			iter.count = iov_count;
+			iter.nr_segs = nr_segs;
 copy_iov:
 			ret = io_setup_async_rw(req, io_size, iovec,
						inline_vecs, &iter);
@@ -4282,28 +4401,6 @@ static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
 	__io_queue_proc(&pt->req->apoll->poll, pt, head);
 }
 
-static void io_sq_thread_drop_mm(struct io_ring_ctx *ctx)
-{
-	struct mm_struct *mm = current->mm;
-
-	if (mm) {
-		kthread_unuse_mm(mm);
-		mmput(mm);
-	}
-}
-
-static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx,
-				   struct io_kiocb *req)
-{
-	if (io_op_defs[req->opcode].needs_mm && !current->mm) {
-		if (unlikely(!mmget_not_zero(ctx->sqo_mm)))
-			return -EFAULT;
-		kthread_use_mm(ctx->sqo_mm);
-	}
-
-	return 0;
-}
-
 static void io_async_task_func(struct callback_head *cb)
 {
 	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
@@ -5814,6 +5911,9 @@ static void io_submit_state_start(struct io_submit_state *state,
				  unsigned int max_ios)
 {
 	blk_start_plug(&state->plug);
+#ifdef CONFIG_BLOCK
+	state->plug.nowait = true;
+#endif
 	state->free_reqs = 0;
 	state->file = NULL;
 	state->ios_left = max_ios;
-- 
2.27.0
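[Also not part of the patch: for readers unfamiliar with the task_work
mechanism the reissue path relies on, here is a rough userspace analogy.
The completion side only records the failure and queues a callback; the
callback later runs in task context, where it is safe to block and to
reacquire the mm before resubmitting. All names below are made up for
illustration and are not kernel or liburing API.]

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct fake_work {
	void (*fn)(struct fake_work *work);
	struct fake_work *next;
};

/* work queued for "task context", loosely akin to task_work_add() targets */
static struct fake_work *pending;

static void fake_work_add(struct fake_work *work)
{
	work->next = pending;
	pending = work;
}

/* run queued callbacks in "task context", loosely akin to task_work_run() */
static void fake_work_run(void)
{
	while (pending) {
		struct fake_work *work = pending;

		pending = work->next;
		work->fn(work);
	}
}

struct fake_req {
	struct fake_work work;
	int res;
};

/* plays the role of io_rw_resubmit(): blocking is fine here, so reissue */
static void resubmit(struct fake_work *work)
{
	struct fake_req *req = container_of(work, struct fake_req, work);

	printf("reissuing request that saw %d\n", req->res);
}

/* plays the role of io_complete_rw(): completion context, defer instead of blocking */
static void complete(struct fake_req *req, int res)
{
	if (res == -EAGAIN) {
		req->res = res;
		req->work.fn = resubmit;
		fake_work_add(&req->work);
		return;
	}
	printf("completed with %d\n", res);
}

int main(void)
{
	struct fake_req req = { .res = 0 };

	complete(&req, -EAGAIN);	/* only queues the reissue */
	fake_work_run();		/* performs the actual reissue */
	return 0;
}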