io-uring.vger.kernel.org archive mirror
* [PATCH 0/4] iov remapping hardening
@ 2020-09-05 21:45 Pavel Begunkov
  2020-09-05 21:45 ` [PATCH 1/4] io_uring: simplify io_rw_prep_async() Pavel Begunkov
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-09-05 21:45 UTC (permalink / raw)
  To: Jens Axboe, io-uring

This cleans up bits around iov remapping. The patches initially targeted
for-5.9, but if you want them for 5.10, let me know and I'll rebase.

Pavel Begunkov (4):
  io_uring: simplify io_rw_prep_async()
  io_uring: refactor io_req_map_rw()
  io_uring: fix overlapped memcpy in io_req_map_rw()
  io_uring: kill extra user_bufs check

 fs/io_uring.c | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)

-- 
2.24.0



* [PATCH 1/4] io_uring: simplify io_rw_prep_async()
  2020-09-05 21:45 [PATCH 0/4] iov remapping hardening Pavel Begunkov
@ 2020-09-05 21:45 ` Pavel Begunkov
  2020-09-05 21:45 ` [PATCH 2/4] io_uring: refactor io_req_map_rw() Pavel Begunkov
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-09-05 21:45 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Don't touch iter->iov and iov in between __io_import_iovec() and
io_req_map_rw(): the former already sets them correctly, and reassigning
them only creates one more case with a NULL'ed iov to consider in
io_req_map_rw().
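
For illustration only, here is a minimal userspace analogue of the
ownership convention involved (made-up names, not io_uring code): the
importer decides whether inline storage is enough and reports that
through the out-parameter, so the caller must not pre-seed that pointer
or the "nothing to free" case gets lost.

  #include <errno.h>
  #include <stdlib.h>
  #include <string.h>

  /* Fills @fast or a freshly allocated buffer; *out is NULL when the
   * inline storage was used and there is nothing to free afterwards. */
  static int import_buf(const char *src, size_t len,
                        char *fast, size_t fast_len, char **out)
  {
          if (len <= fast_len) {
                  memcpy(fast, src, len);
                  *out = NULL;
                  return 0;
          }
          *out = malloc(len);     /* caller owns it and must free() it */
          if (!*out)
                  return -ENOMEM;
          memcpy(*out, src, len);
          return 0;
  }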

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index f703182df2d4..e45572fbb152 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2980,16 +2980,14 @@ static inline int io_rw_prep_async(struct io_kiocb *req, int rw,
 				   bool force_nonblock)
 {
 	struct io_async_rw *iorw = &req->io->rw;
-	struct iovec *iov;
+	struct iovec *iov = iorw->fast_iov;
 	ssize_t ret;
 
-	iorw->iter.iov = iov = iorw->fast_iov;
 	ret = __io_import_iovec(rw, req, &iov, &iorw->iter, !force_nonblock);
 	if (unlikely(ret < 0))
 		return ret;
 
-	iorw->iter.iov = iov;
-	io_req_map_rw(req, iorw->iter.iov, iorw->fast_iov, &iorw->iter);
+	io_req_map_rw(req, iov, iorw->fast_iov, &iorw->iter);
 	return 0;
 }
 
-- 
2.24.0



* [PATCH 2/4] io_uring: refactor io_req_map_rw()
  2020-09-05 21:45 [PATCH 0/4] iov remapping hardening Pavel Begunkov
  2020-09-05 21:45 ` [PATCH 1/4] io_uring: simplify io_rw_prep_async() Pavel Begunkov
@ 2020-09-05 21:45 ` Pavel Begunkov
  2020-09-05 21:45 ` [PATCH 3/4] io_uring: fix overlapped memcpy in io_req_map_rw() Pavel Begunkov
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-09-05 21:45 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Set rw->free_iovec to @iovec unconditionally. That gives an identical
result, since @iovec is NULL on every path that previously left
rw->free_iovec as NULL, and it stresses that the @iovec parameter and
rw->free_iovec play the same role.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index e45572fbb152..03808e865dde 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2925,7 +2925,7 @@ static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
 	struct io_async_rw *rw = &req->io->rw;
 
 	memcpy(&rw->iter, iter, sizeof(*iter));
-	rw->free_iovec = NULL;
+	rw->free_iovec = iovec;
 	rw->bytes_done = 0;
 	/* can only be fixed buffers, no need to do anything */
 	if (iter->type == ITER_BVEC)
@@ -2942,7 +2942,6 @@ static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
 			memcpy(rw->fast_iov + iov_off, fast_iov + iov_off,
 			       sizeof(struct iovec) * iter->nr_segs);
 	} else {
-		rw->free_iovec = iovec;
 		req->flags |= REQ_F_NEED_CLEANUP;
 	}
 }
-- 
2.24.0



* [PATCH 3/4] io_uring: fix overlapped memcpy in io_req_map_rw()
  2020-09-05 21:45 [PATCH 0/4] iov remapping hardening Pavel Begunkov
  2020-09-05 21:45 ` [PATCH 1/4] io_uring: simplify io_rw_prep_async() Pavel Begunkov
  2020-09-05 21:45 ` [PATCH 2/4] io_uring: refactor io_req_map_rw() Pavel Begunkov
@ 2020-09-05 21:45 ` Pavel Begunkov
  2020-09-05 21:45 ` [PATCH 4/4] io_uring: kill extra user_bufs check Pavel Begunkov
  2020-09-05 21:58 ` [PATCH 0/4] iov remapping hardening Jens Axboe
  4 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-09-05 21:45 UTC (permalink / raw)
  To: Jens Axboe, io-uring

When io_req_map_rw() is called from io_rw_prep_async(), it memcpy()s
iorw->iter into itself. Even though it doesn't lead to an error, such an
overlapping memcpy() violates memcpy()'s no-overlap (restrict) contract
and is considered bad practice.

Inline io_req_map_rw() into io_rw_prep_async(). We don't really need any
remapping there, so the inlined version is much simpler than the generic
implementation.
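
As a reminder of why this matters, here is a minimal, self-contained
userspace sketch (not kernel code) of the rule being respected: memcpy()
with overlapping, or here identical, source and destination is undefined
behaviour per C11 7.24.2.1 even when it happens to work, while memmove()
explicitly allows overlap.

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          char buf[16] = "overlap";

          /* memcpy(buf, buf, sizeof(buf)); */  /* undefined: regions overlap */
          memmove(buf, buf, sizeof(buf));       /* defined: memmove handles overlap */
          puts(buf);
          return 0;
  }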

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 03808e865dde..fc2aaaaca908 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2986,7 +2986,10 @@ static inline int io_rw_prep_async(struct io_kiocb *req, int rw,
 	if (unlikely(ret < 0))
 		return ret;
 
-	io_req_map_rw(req, iov, iorw->fast_iov, &iorw->iter);
+	iorw->bytes_done = 0;
+	iorw->free_iovec = iov;
+	if (iov)
+		req->flags |= REQ_F_NEED_CLEANUP;
 	return 0;
 }
 
-- 
2.24.0



* [PATCH 4/4] io_uring: kill extra user_bufs check
  2020-09-05 21:45 [PATCH 0/4] iov remapping hardening Pavel Begunkov
                   ` (2 preceding siblings ...)
  2020-09-05 21:45 ` [PATCH 3/4] io_uring: fix overlapped memcpy in io_req_map_rw() Pavel Begunkov
@ 2020-09-05 21:45 ` Pavel Begunkov
  2020-09-05 21:58 ` [PATCH 0/4] iov remapping hardening Jens Axboe
  4 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-09-05 21:45 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Testing ctx->user_bufs for NULL in io_import_fixed() is not necessary:
in that case ctx->nr_user_bufs would be zero, and the following
buf_index check would fail anyway.
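
A minimal userspace sketch of the reasoning (made-up names, not kernel
code; the kernel additionally runs the index through array_index_nospec()
as in the hunk below): with an unsigned index, "index >= nr" already
rejects every lookup while nr == 0, i.e. while no table has been
registered, so the NULL pointer is never dereferenced.

  #include <stdio.h>

  struct table {
          void **bufs;            /* NULL until buffers are registered */
          unsigned int nr;        /* 0 until buffers are registered */
  };

  static void *lookup(const struct table *t, unsigned int index)
  {
          if (index >= t->nr)     /* always true while t->nr == 0 */
                  return NULL;
          return t->bufs[index];
  }

  int main(void)
  {
          struct table empty = { .bufs = NULL, .nr = 0 };

          printf("%p\n", lookup(&empty, 0));  /* rejected by the bound check */
          return 0;
  }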

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index fc2aaaaca908..2e3adf9d17dd 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2587,18 +2587,12 @@ static ssize_t io_import_fixed(struct io_kiocb *req, int rw,
 	struct io_ring_ctx *ctx = req->ctx;
 	size_t len = req->rw.len;
 	struct io_mapped_ubuf *imu;
-	u16 index, buf_index;
+	u16 index, buf_index = req->buf_index;
 	size_t offset;
 	u64 buf_addr;
 
-	/* attempt to use fixed buffers without having provided iovecs */
-	if (unlikely(!ctx->user_bufs))
-		return -EFAULT;
-
-	buf_index = req->buf_index;
 	if (unlikely(buf_index >= ctx->nr_user_bufs))
 		return -EFAULT;
-
 	index = array_index_nospec(buf_index, ctx->nr_user_bufs);
 	imu = &ctx->user_bufs[index];
 	buf_addr = req->rw.addr;
-- 
2.24.0



* Re: [PATCH 0/4] iov remapping hardening
  2020-09-05 21:45 [PATCH 0/4] iov remapping hardening Pavel Begunkov
                   ` (3 preceding siblings ...)
  2020-09-05 21:45 ` [PATCH 4/4] io_uring: kill extra user_bufs check Pavel Begunkov
@ 2020-09-05 21:58 ` Jens Axboe
  2020-09-05 22:15   ` Pavel Begunkov
  4 siblings, 1 reply; 9+ messages in thread
From: Jens Axboe @ 2020-09-05 21:58 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring

On 9/5/20 3:45 PM, Pavel Begunkov wrote:
> This cleans up bits around iov remapping. The patches initially targeted
> for-5.9, but if you want them for 5.10, let me know and I'll rebase.

Yeah, let's aim these for 5.10 - they look fine to me, but I'd rather avoid
any extra churn if at all possible.

-- 
Jens Axboe



* Re: [PATCH 0/4] iov remapping hardening
  2020-09-05 21:58 ` [PATCH 0/4] iov remapping hardening Jens Axboe
@ 2020-09-05 22:15   ` Pavel Begunkov
  2020-09-05 22:34     ` Jens Axboe
  0 siblings, 1 reply; 9+ messages in thread
From: Pavel Begunkov @ 2020-09-05 22:15 UTC (permalink / raw)
  To: Jens Axboe, io-uring

On 06/09/2020 00:58, Jens Axboe wrote:
> On 9/5/20 3:45 PM, Pavel Begunkov wrote:
>> This cleans up bits around iov remapping. The patches initially targeted
>> for-5.9, but if you want them for 5.10, let me know and I'll rebase.
> 
> Yeah, let's aim these for 5.10 - they look fine to me, but I'd rather avoid
> any extra churn if at all possible.

These apply cleanly to for-5.10/io_uring. They should conflict with
your req->io shrinking patches, but it seems you haven't queued
them yet.

-- 
Pavel Begunkov


* Re: [PATCH 0/4] iov remapping hardening
  2020-09-05 22:15   ` Pavel Begunkov
@ 2020-09-05 22:34     ` Jens Axboe
  2020-09-05 22:51       ` Jens Axboe
  0 siblings, 1 reply; 9+ messages in thread
From: Jens Axboe @ 2020-09-05 22:34 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring

On 9/5/20 4:15 PM, Pavel Begunkov wrote:
> On 06/09/2020 00:58, Jens Axboe wrote:
>> On 9/5/20 3:45 PM, Pavel Begunkov wrote:
>>> This cleans up bits around iov remapping. The patches initially targeted
>>> for-5.9, but if you want them for 5.10, let me know and I'll rebase.
>>
>> Yeah, let's aim these for 5.10 - they look fine to me, but I'd rather avoid
>> any extra churn if at all possible.
> 
> These apply cleanly to for-5.10/io_uring. They should conflict with

Great, I'll queue them up then.

> your req->io shrinking patches, but it seems you haven't queued
> them yet.

Which ones?

-- 
Jens Axboe



* Re: [PATCH 0/4] iov remapping hardening
  2020-09-05 22:34     ` Jens Axboe
@ 2020-09-05 22:51       ` Jens Axboe
  0 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2020-09-05 22:51 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring

On 9/5/20 4:34 PM, Jens Axboe wrote:
> On 9/5/20 4:15 PM, Pavel Begunkov wrote:
>> On 06/09/2020 00:58, Jens Axboe wrote:
>>> On 9/5/20 3:45 PM, Pavel Begunkov wrote:
>>>> This cleans up bits around iov remapping. The patches initially targeted
>>>> for-5.9, but if you want them for 5.10, let me know and I'll rebase.
>>>
>>> Yeah, let's aim these for 5.10 - they look fine to me, but I'd rather avoid
>>> any extra churn if at all possible.
>>
>> These apply cleanly to for-5.10/io_uring. They should conflict with
> 
> Great, I'll queue them up then.
> 
>> your req->io shrinking patches, but it seems you haven't queued
>> them yet.
> 
> Which ones?

Ah yes, I totally forgot to queue that... The conflicts don't matter much,
some of the others caused an issue or two as well. I've queued it for
testing.

Thanks for the reminder :-)

-- 
Jens Axboe




Thread overview: 9+ messages
2020-09-05 21:45 [PATCH 0/4] iov remapping hardening Pavel Begunkov
2020-09-05 21:45 ` [PATCH 1/4] io_uring: simplify io_rw_prep_async() Pavel Begunkov
2020-09-05 21:45 ` [PATCH 2/4] io_uring: refactor io_req_map_rw() Pavel Begunkov
2020-09-05 21:45 ` [PATCH 3/4] io_uring: fix overlapped memcpy in io_req_map_rw() Pavel Begunkov
2020-09-05 21:45 ` [PATCH 4/4] io_uring: kill extra user_bufs check Pavel Begunkov
2020-09-05 21:58 ` [PATCH 0/4] iov remapping hardening Jens Axboe
2020-09-05 22:15   ` Pavel Begunkov
2020-09-05 22:34     ` Jens Axboe
2020-09-05 22:51       ` Jens Axboe
