* io-uring and tcp sockets
@ 2020-11-03  1:56 David Ahern
  2020-11-04 11:21 ` Stefan Metzmacher
  0 siblings, 1 reply; 6+ messages in thread
From: David Ahern @ 2020-11-03  1:56 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Hi:

New to io_uring but can't find this answer online, so reaching out.

I was trying out io_uring with netperf - tcp stream sockets - and
noticed a submission is called complete even with a partial send
(io_send(), ret < sr->len). Saving the offset of what succeeded (plus
some other adjustments) and retrying the sqe solves the problem.
But the issue seems fundamental, so I am wondering if it is intentional?
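
For reference, this is roughly the userspace retry I mean. A minimal
sketch using liburing; the 'conn' bookkeeping is hypothetical and error
handling is trimmed:

#include <liburing.h>

/* Hypothetical per-connection state for resuming a short send. */
struct conn {
	int	fd;
	char	*buf;
	size_t	len;	/* bytes still to send */
	size_t	off;	/* bytes already sent */
};

static void submit_send(struct io_uring *ring, struct conn *c)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	io_uring_prep_send(sqe, c->fd, c->buf + c->off, c->len, 0);
	io_uring_sqe_set_data(sqe, c);
	io_uring_submit(ring);
}

static void handle_cqe(struct io_uring *ring, struct io_uring_cqe *cqe)
{
	struct conn *c = io_uring_cqe_get_data(cqe);
	int res = cqe->res;

	io_uring_cqe_seen(ring, cqe);
	if (res > 0 && (size_t)res < c->len) {
		/* partial send: save the offset and retry the rest */
		c->off += res;
		c->len -= res;
		submit_send(ring, c);
	}
}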

Thanks,
David


* Re: io-uring and tcp sockets
  2020-11-03  1:56 io-uring and tcp sockets David Ahern
@ 2020-11-04 11:21 ` Stefan Metzmacher
  2020-11-04 14:50   ` Jens Axboe
  0 siblings, 1 reply; 6+ messages in thread
From: Stefan Metzmacher @ 2020-11-04 11:21 UTC (permalink / raw)
  To: David Ahern, Jens Axboe, io-uring



Hi David,

> New to io_uring but can't find this answer online, so reaching out.
> 
> I was trying out io_uring with netperf - tcp stream sockets - and
> noticed a submission is called complete even with a partial send
> (io_send(), ret < sr->len). Saving the offset of what succeeded (plus
> some other adjustments) and retrying the sqe solves the problem.
> But the issue seems fundamental, so I am wondering if it is intentional?

I guess this is just the way it is currently.

For Samba I'd also like to be sure to never get a short write to a socket.

There I'd like to keep the pipeline full by submitting as many sqes as possible
(without waiting for completions on every single IORING_OP_SENDMSG/IORING_OP_SPLICE)
using IOSQE_IO_DRAIN or IOSQE_IO_LINK and maybe IOSQE_ASYNC or IORING_SETUP_SQPOLL.
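
Roughly like this (just a sketch with liburing names, untested; note
that a short but successful send does not break the link, which is
exactly the problem discussed here):

#include <liburing.h>

static void queue_two_linked_sends(struct io_uring *ring, int fd,
				   const void *hdr, size_t hdr_len,
				   const void *payload, size_t pay_len)
{
	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_send(sqe, fd, hdr, hdr_len, 0);
	sqe->flags |= IOSQE_IO_LINK;	/* next sqe waits for this one */

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_send(sqe, fd, payload, pay_len, 0);

	/* one syscall submits the whole chain */
	io_uring_submit(ring);
}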

But for now I just used a single sqe with IOSQE_ASYNC at a time.

Jens, do you see a way to overcome that limitation?

As far as I understand, the situation is completely fixed now and
it's not possible to get short reads and writes for file io anymore, is that correct?

metze




* Re: io-uring and tcp sockets
  2020-11-04 11:21 ` Stefan Metzmacher
@ 2020-11-04 14:50   ` Jens Axboe
  2020-11-04 15:38     ` David Ahern
  0 siblings, 1 reply; 6+ messages in thread
From: Jens Axboe @ 2020-11-04 14:50 UTC (permalink / raw)
  To: Stefan Metzmacher, David Ahern, io-uring

On 11/4/20 4:21 AM, Stefan Metzmacher wrote:
> Hi David,
> 
>> New to io_uring but can't find this answer online, so reaching out.
>>
>> I was trying out io_uring with netperf - tcp stream sockets - and
>> noticed a submission is called complete even with a partial send
>> (io_send(), ret < sr->len). Saving the offset of what succeeded (plus
>> some other adjustments) and retrying the sqe solves the problem.
>> But the issue seems fundamental, so I am wondering if it is intentional?
> 
> I guess this is just the way it is currently.
> 
> For Samba I'd also like to be sure to never get a short write to a socket.
> 
> There I'd like to keep the pipeline full by submitting as many sqes as possible
> (without waiting for completions on every single IORING_OP_SENDMSG/IORING_OP_SPLICE)
> using IOSQE_IO_DRAIN or IOSQE_IO_LINK and maybe IOSQE_ASYNC or IORING_SETUP_SQPOLL.
> 
> But for now I just used a single sqe with IOSQE_ASYNC at a time.
> 
> Jens, do you see a way to overcome that limitation?
> 
> As far as I understand, the situation is completely fixed now and
> it's not possible to get short reads and writes for file io anymore, is that correct?

Right, the regular file IO will not return short reads or writes unless a
blocking attempt returns 0 (or short), which would be expected. The send/recvmsg
side just returns what the socket read/write would return, just as if you
had made the normal system call variants of those calls.

It would not be impossible to make recvmsg/sendmsg handle this internally as
well; we just need a good way to indicate the intent of "please satisfy the
whole thing before returning".
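
For illustration, such an opt-in could look like reusing MSG_WAITALL in
the sqe's msg_flags; that is purely an assumption for the sketch, it is
not honored today:

	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	/* hypothetical: MSG_WAITALL meaning "retry until all of len is sent" */
	io_uring_prep_send(sqe, fd, buf, len, MSG_WAITALL);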

-- 
Jens Axboe



* Re: io-uring and tcp sockets
  2020-11-04 14:50   ` Jens Axboe
@ 2020-11-04 15:38     ` David Ahern
  2020-11-04 17:08       ` Stefan Metzmacher
  0 siblings, 1 reply; 6+ messages in thread
From: David Ahern @ 2020-11-04 15:38 UTC (permalink / raw)
  To: Jens Axboe, Stefan Metzmacher, io-uring


On 11/4/20 7:50 AM, Jens Axboe wrote:
> On 11/4/20 4:21 AM, Stefan Metzmacher wrote:
>> Hi David,
>>
>>> New to io_uring but can't find this answer online, so reaching out.
>>>
>>> I was trying out io_uring with netperf - tcp stream sockets - and
>>> noticed a submission is called complete even with a partial send
>>> (io_send(), ret < sr->len). Saving the offset of what succeeded (plus
>>> some other adjustments) and retrying the sqe solves the problem.
>>> But the issue seems fundamental, so I am wondering if it is intentional?
>>
>> I guess this is just the way it is currently.
>>
>> For Samba I'd also like to be sure to never get a short write to a socket.
>>
>> There I'd like to keep the pipeline full by submitting as many sqes as possible
>> (without waiting for completions on every single IORING_OP_SENDMSG/IORING_OP_SPLICE)
>> using IOSQE_IO_DRAIN or IOSQE_IO_LINK and maybe IOSQE_ASYNC or IORING_SETUP_SQPOLL.
>>
>> But for now I just used a single sqe with IOSQE_ASYNC at a time.
>>
>> Jens, do you see a way to overcome that limitation?
>>
>> As far as I understand, the situation is completely fixed now and
>> it's not possible to get short reads and writes for file io anymore, is that correct?
> 
> Right, the regular file IO will not return short reads or writes unless a
> blocking attempt returns 0 (or short), which would be expected. The send/recvmsg
> side just returns what the socket read/write would return, just as if you
> had made the normal system call variants of those calls.
> 
> It would not be impossible to make recvmsg/sendmsg handle this internally as
> well; we just need a good way to indicate the intent of "please satisfy the
> whole thing before returning".
> 

The attached patch handles the full send request; sendmsg can be
handled similarly.

I take your comment to mean there should be an sqe flag to opt in to the
behavior change? Any pointers to which flag set it belongs in?

[-- Attachment #2: 0001-io_uring-Handle-incomplete-sends-for-stream-sockets.patch --]

From 9d6ca280512d3a539c771879d82645a0f7b5a27d Mon Sep 17 00:00:00 2001
From: David Ahern <dsahern@gmail.com>
Date: Mon, 2 Nov 2020 18:31:00 -0700
Subject: [PATCH] io_uring: Handle incomplete sends for stream sockets

Signed-off-by: David Ahern <dsahern@gmail.com>
---
 fs/io_uring.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index aae0ef2ec34d..d15511d1e284 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -421,6 +421,7 @@ struct io_sr_msg {
 	int				msg_flags;
 	int				bgid;
 	size_t				len;
+	size_t				offset;
 	struct io_buffer		*kbuf;
 };
 
@@ -4149,7 +4150,8 @@ static int io_send(struct io_kiocb *req, bool force_nonblock,
 	if (unlikely(!sock))
 		return ret;
 
-	ret = import_single_range(WRITE, sr->buf, sr->len, &iov, &msg.msg_iter);
+	ret = import_single_range(WRITE, sr->buf + sr->offset, sr->len, &iov,
+				  &msg.msg_iter);
 	if (unlikely(ret))
 		return ret;
 
@@ -4171,8 +4173,18 @@ static int io_send(struct io_kiocb *req, bool force_nonblock,
 	if (ret == -ERESTARTSYS)
 		ret = -EINTR;
 
-	if (ret < 0)
+	if (ret < 0) {
 		req_set_fail_links(req);
+	} else if (ret > 0 && sock->type == SOCK_STREAM) {
+		if (unlikely(ret < sr->len)) {
+			pr_debug("req %px sr->offset %lu sr->len %lu ret %d\n",
+				 req, sr->offset, sr->len, ret);
+			sr->len -= ret;
+			sr->offset += ret;
+			return -EAGAIN;
+		}
+		ret += sr->offset;
+	}
 	__io_req_complete(req, ret, 0, cs);
 	return 0;
 }
@@ -6460,6 +6472,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	/* same numerical values with corresponding REQ_F_*, safe to copy */
 	req->flags |= sqe_flags;
 
+	if (req->opcode == IORING_OP_SEND || req->opcode == IORING_OP_SENDMSG)
+		req->sr_msg.offset = 0;
+
 	if (!io_op_defs[req->opcode].needs_file)
 		return 0;
 
-- 
2.25.1



* Re: io-uring and tcp sockets
  2020-11-04 15:38     ` David Ahern
@ 2020-11-04 17:08       ` Stefan Metzmacher
  2020-11-08 23:18         ` David Ahern
  0 siblings, 1 reply; 6+ messages in thread
From: Stefan Metzmacher @ 2020-11-04 17:08 UTC (permalink / raw)
  To: David Ahern, Jens Axboe, io-uring



Am 04.11.20 um 16:38 schrieb David Ahern:
> On 11/4/20 7:50 AM, Jens Axboe wrote:
>> On 11/4/20 4:21 AM, Stefan Metzmacher wrote:
>>> Hi David,
>>>
>>>> New to io_uring but can't find this answer online, so reaching out.
>>>>
>>>> I was trying out io_uring with netperf - tcp stream sockets - and
>>>> noticed a submission is called complete even with a partial send
>>>> (io_send(), ret < sr->len). Saving the offset of what succeeded (plus
>>>> some other adjustments) and retrying the sqe solves the problem.
>>>> But the issue seems fundamental, so I am wondering if it is intentional?
>>>
>>> I guess this is just the way it is currently.
>>>
>>> For Samba I'd also like to be sure to never get a short write to a socket.
>>>
>>> There I'd like to keep the pipeline full by submitting as many sqes as possible
>>> (without waiting for completions on every single IORING_OP_SENDMSG/IORING_OP_SPLICE)
>>> using IOSQE_IO_DRAIN or IOSQE_IO_LINK and maybe IOSQE_ASYNC or IORING_SETUP_SQPOLL.
>>>
>>> But for now I just used a single sqe with IOSQE_ASYNC at a time.
>>>
>>> Jens, do you see a way to overcome that limitation?
>>>
>>> As far as I understand, the situation is completely fixed now and
>>> it's not possible to get short reads and writes for file io anymore, is that correct?
>>
>> Right, the regular file IO will not return short reads or writes unless a
>> blocking attempt returns 0 (or short), which would be expected. The send/recvmsg
>> side just returns what the socket read/write would return, just as if you
>> had made the normal system call variants of those calls.
>>
>> It would not be impossible to make recvmsg/sendmsg handle this internally as
>> well; we just need a good way to indicate the intent of "please satisfy the
>> whole thing before returning".
>>
> 
> The attached patch handles the full send request; sendmsg can be
> handled similarly.
> 
> I take your comment to mean there should be an sqe flag to opt in to the
> behavior change? Any pointers to which flag set it belongs in?

sendmsg has msg_control; I think we'll need more interaction with the socket layer here
in order to wait in a single low-level ->sendmsg_locked() call.

I know IORING_OP_SENDMSG doesn't support msg_control currently, but I hope to get that fixed in the future.
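
For context, msg_control carries the ancillary data of a struct msghdr,
e.g. for passing a file descriptor; a minimal plain-sendmsg() sketch:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send 'len' bytes plus one file descriptor via SCM_RIGHTS. */
static ssize_t send_with_fd(int sock, int fd_to_pass,
			    const void *buf, size_t len)
{
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	union {
		char buf[CMSG_SPACE(sizeof(int))];
		struct cmsghdr align;
	} u;
	struct msghdr msg = {
		.msg_iov	= &iov,
		.msg_iovlen	= 1,
		.msg_control	= u.buf,
		.msg_controllen	= sizeof(u.buf),
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

	return sendmsg(sock, &msg, 0);
}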

metze





* Re: io-uring and tcp sockets
  2020-11-04 17:08       ` Stefan Metzmacher
@ 2020-11-08 23:18         ` David Ahern
  0 siblings, 0 replies; 6+ messages in thread
From: David Ahern @ 2020-11-08 23:18 UTC (permalink / raw)
  To: Stefan Metzmacher, Jens Axboe, io-uring

On 11/4/20 10:08 AM, Stefan Metzmacher wrote:
> sendmsg has msg_control; I think we'll need more interaction with the socket layer here
> in order to wait in a single low-level ->sendmsg_locked() call.
> 
> I know IORING_OP_SENDMSG doesn't support msg_control currently, but I hope to get that fixed in the future.

That does not work. __io_queue_sqe calls io_issue_sqe with the
force_nonblock flag set. io_send and io_sendmsg respond to that flag by
setting MSG_DONTWAIT in the respective socket call. Hence, my question
about the short send being by design.
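
The path in question, paraphrased (not a verbatim quote of
fs/io_uring.c):

	/* inside io_send(): the first issue attempt must not block */
	flags = sr->msg_flags;
	if (flags & MSG_DONTWAIT)
		req->flags |= REQ_F_NOWAIT;
	else if (force_nonblock)
		flags |= MSG_DONTWAIT;

	msg.msg_flags = flags;
	ret = sock_sendmsg(sock, &msg);
	if (force_nonblock && ret == -EAGAIN)
		return -EAGAIN;	/* punt to async context and retry */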

