* [PATCH 0/5] timeout fixes
From: Pavel Begunkov @ 2020-04-30 19:31 UTC
  To: Jens Axboe, io-uring, linux-kernel

[1,2] are small unrelated patches.
[3,4] are the last two timeout patches, with one variable renamed.
[5] fixes a timeout problem related to batched CQ commits. From
what I can see, this should be the last of the timeout fixes.

Pavel Begunkov (5):
  io_uring: check non-sync defer_list carefully
  io_uring: pass nxt from sync_file_range()
  io_uring: trigger timeout after any sqe->off CQEs
  io_uring: don't trigger timeout with another t-out
  io_uring: fix timeout offset with batch CQ commit

 fs/io_uring.c | 130 +++++++++++++++++++++-----------------------------
 1 file changed, 54 insertions(+), 76 deletions(-)

-- 
2.24.0



* [PATCH 1/5] io_uring: check non-sync defer_list carefully
From: Pavel Begunkov @ 2020-04-30 19:31 UTC
  To: Jens Axboe, io-uring, linux-kernel

io_req_defer() does double-checked locking. Use the proper helper for
that, i.e. list_empty_careful().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 8ee7b4f72b8f..6b4d3d8a6941 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4974,7 +4974,7 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	int ret;
 
 	/* Still need defer if there is pending req in defer list. */
-	if (!req_need_defer(req) && list_empty(&ctx->defer_list))
+	if (!req_need_defer(req) && list_empty_careful(&ctx->defer_list))
 		return 0;
 
 	if (!req->io && io_alloc_async_ctx(req))
-- 
2.24.0
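
For reference, the double-checked pattern this relies on, as a
user-space sketch (illustrative types and names, not the kernel
implementation): list_empty_careful() reads both link pointers, so a
concurrent, half-finished deletion is not misread as "empty" during
the lockless first check.

#include <pthread.h>

struct list_head { struct list_head *next, *prev; };

/* plain emptiness test -- only safe under the lock */
static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

/* "careful" variant: checks ->next and ->prev, tolerating a racing,
 * in-progress deletion on the lockless path */
static int list_empty_careful(const struct list_head *head)
{
	const struct list_head *next = head->next;

	return (next == head) && (next == head->prev);
}

/* double-checked locking: cheap lockless check first, authoritative
 * re-check under the lock before acting on the list */
static int has_pending(struct list_head *defer_list, pthread_mutex_t *lock)
{
	int pending;

	if (list_empty_careful(defer_list))	/* lockless fast path */
		return 0;

	pthread_mutex_lock(lock);		/* slow path: confirm */
	pending = !list_empty(defer_list);
	pthread_mutex_unlock(lock);
	return pending;
}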



* [PATCH 2/5] io_uring: pass nxt from sync_file_range()
From: Pavel Begunkov @ 2020-04-30 19:31 UTC
  To: Jens Axboe, io-uring, linux-kernel

Make io_sync_file_range_finish() use io_steal_work() to pass on its
nxt work instead of dropping the submission reference directly.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6b4d3d8a6941..8fff427345d5 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3502,7 +3502,7 @@ static void io_sync_file_range_finish(struct io_wq_work **workptr)
 	if (io_req_cancelled(req))
 		return;
 	__io_sync_file_range(req);
-	io_put_req(req); /* put submission ref */
+	io_steal_work(req, workptr);
 }
 
 static int io_sync_file_range(struct io_kiocb *req, bool force_nonblock)
-- 
2.24.0
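
A rough user-space sketch of the work-stealing idea (all names here
are made up for illustration; the real io_steal_work() differs): when
a completed request has a linked follow-up, the handler hands it back
to the worker through @workptr rather than re-queueing it.

#include <stddef.h>

struct work;
typedef void (*work_fn)(struct work *self, struct work **workptr);

struct work {
	work_fn		fn;
	struct work	*next_link;	/* linked follow-up, if any */
};

/* completion side: "steal" the linked work into the worker's hands */
static void complete_and_steal(struct work *req, struct work **workptr)
{
	if (req->next_link)
		*workptr = req->next_link;
	/* ... drop references on @req here ... */
}

/* io-wq-style worker loop: keeps running as long as handlers keep
 * handing it follow-up work, with no re-queue round trip */
static void worker_run(struct work *w)
{
	while (w) {
		struct work *next = NULL;

		w->fn(w, &next);
		w = next;
	}
}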



* [PATCH 3/5] io_uring: trigger timeout after any sqe->off CQEs
From: Pavel Begunkov @ 2020-04-30 19:31 UTC
  To: Jens Axboe, io-uring, linux-kernel

Sequence-mode timeouts currently wait not for sqe->off CQEs, but for
sqe->off plus the number of prior inflight requests, with a quirk that
ignores other timeouts' completions. Make them wait for exactly
sqe->off CQEs, using the completion count (CQ tail) for accounting.

Reported-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 120 +++++++++++++++++++-------------------------------
 1 file changed, 46 insertions(+), 74 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 8fff427345d5..006ac57af842 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -384,7 +384,8 @@ struct io_timeout {
 	struct file			*file;
 	u64				addr;
 	int				flags;
-	u32				count;
+	u32				off;
+	u32				target_seq;
 };
 
 struct io_rw {
@@ -982,23 +983,6 @@ static struct io_kiocb *io_get_deferred_req(struct io_ring_ctx *ctx)
 	return NULL;
 }
 
-static struct io_kiocb *io_get_timeout_req(struct io_ring_ctx *ctx)
-{
-	struct io_kiocb *req;
-
-	req = list_first_entry_or_null(&ctx->timeout_list, struct io_kiocb, list);
-	if (req) {
-		if (req->flags & REQ_F_TIMEOUT_NOSEQ)
-			return NULL;
-		if (!__req_need_defer(req)) {
-			list_del_init(&req->list);
-			return req;
-		}
-	}
-
-	return NULL;
-}
-
 static void __io_commit_cqring(struct io_ring_ctx *ctx)
 {
 	struct io_rings *rings = ctx->rings;
@@ -1114,12 +1098,42 @@ static void io_kill_timeouts(struct io_ring_ctx *ctx)
 	spin_unlock_irq(&ctx->completion_lock);
 }
 
+static inline bool io_check_in_range(u32 pos, u32 start, u32 end)
+{
+	/* if @end < @start, the range wraps: check [start, UINT_MAX] + [0, end] */
+	return (pos - start) <= (end - start);
+}
+
+static void __io_flush_timeouts(struct io_ring_ctx *ctx)
+{
+	u32 end, start;
+
+	start = end = ctx->cached_cq_tail;
+	do {
+		struct io_kiocb *req = list_first_entry(&ctx->timeout_list,
+							struct io_kiocb, list);
+
+		if (req->flags & REQ_F_TIMEOUT_NOSEQ)
+			break;
+		/*
+		 * multiple timeouts may have the same target,
+		 * check that @req is in [first_tail, cur_tail]
+		 */
+		if (!io_check_in_range(req->timeout.target_seq, start, end))
+			break;
+
+		list_del_init(&req->list);
+		io_kill_timeout(req);
+		end = ctx->cached_cq_tail;
+	} while (!list_empty(&ctx->timeout_list));
+}
+
 static void io_commit_cqring(struct io_ring_ctx *ctx)
 {
 	struct io_kiocb *req;
 
-	while ((req = io_get_timeout_req(ctx)) != NULL)
-		io_kill_timeout(req);
+	if (!list_empty(&ctx->timeout_list))
+		__io_flush_timeouts(ctx);
 
 	__io_commit_cqring(ctx);
 
@@ -4540,20 +4554,8 @@ static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
 	 * We could be racing with timeout deletion. If the list is empty,
 	 * then timeout lookup already found it and will be handling it.
 	 */
-	if (!list_empty(&req->list)) {
-		struct io_kiocb *prev;
-
-		/*
-		 * Adjust the reqs sequence before the current one because it
-		 * will consume a slot in the cq_ring and the cq_tail
-		 * pointer will be increased, otherwise other timeout reqs may
-		 * return in advance without waiting for enough wait_nr.
-		 */
-		prev = req;
-		list_for_each_entry_continue_reverse(prev, &ctx->timeout_list, list)
-			prev->sequence++;
+	if (!list_empty(&req->list))
 		list_del_init(&req->list);
-	}
 
 	io_cqring_fill_event(req, -ETIME);
 	io_commit_cqring(ctx);
@@ -4633,18 +4635,19 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 {
 	struct io_timeout_data *data;
 	unsigned flags;
+	u32 off = READ_ONCE(sqe->off);
 
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
 	if (sqe->ioprio || sqe->buf_index || sqe->len != 1)
 		return -EINVAL;
-	if (sqe->off && is_timeout_link)
+	if (off && is_timeout_link)
 		return -EINVAL;
 	flags = READ_ONCE(sqe->timeout_flags);
 	if (flags & ~IORING_TIMEOUT_ABS)
 		return -EINVAL;
 
-	req->timeout.count = READ_ONCE(sqe->off);
+	req->timeout.off = off;
 
 	if (!req->io && io_alloc_async_ctx(req))
 		return -ENOMEM;
@@ -4668,68 +4671,37 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 static int io_timeout(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
-	struct io_timeout_data *data;
+	struct io_timeout_data *data = &req->io->timeout;
 	struct list_head *entry;
-	unsigned span = 0;
-	u32 count = req->timeout.count;
-	u32 seq = req->sequence;
+	u32 tail, off = req->timeout.off;
 
-	data = &req->io->timeout;
+	spin_lock_irq(&ctx->completion_lock);
 
 	/*
 	 * sqe->off holds how many events that need to occur for this
 	 * timeout event to be satisfied. If it isn't set, then this is
 	 * a pure timeout request, sequence isn't used.
 	 */
-	if (!count) {
+	if (!off) {
 		req->flags |= REQ_F_TIMEOUT_NOSEQ;
-		spin_lock_irq(&ctx->completion_lock);
 		entry = ctx->timeout_list.prev;
 		goto add;
 	}
 
-	req->sequence = seq + count;
+	tail = ctx->cached_cq_tail;
+	req->timeout.target_seq = tail + off;
 
 	/*
 	 * Insertion sort, ensuring the first entry in the list is always
 	 * the one we need first.
 	 */
-	spin_lock_irq(&ctx->completion_lock);
 	list_for_each_prev(entry, &ctx->timeout_list) {
 		struct io_kiocb *nxt = list_entry(entry, struct io_kiocb, list);
-		unsigned nxt_seq;
-		long long tmp, tmp_nxt;
-		u32 nxt_offset = nxt->timeout.count;
-
-		if (nxt->flags & REQ_F_TIMEOUT_NOSEQ)
-			continue;
-
-		/*
-		 * Since seq + count can overflow, use type long
-		 * long to store it.
-		 */
-		tmp = (long long)seq + count;
-		nxt_seq = nxt->sequence - nxt_offset;
-		tmp_nxt = (long long)nxt_seq + nxt_offset;
+		u32 nxt_off = nxt->timeout.target_seq - tail;
 
-		/*
-		 * cached_sq_head may overflow, and it will never overflow twice
-		 * once there is some timeout req still be valid.
-		 */
-		if (seq < nxt_seq)
-			tmp += UINT_MAX;
-
-		if (tmp > tmp_nxt)
+		if (!(nxt->flags & REQ_F_TIMEOUT_NOSEQ) && (off >= nxt_off))
 			break;
-
-		/*
-		 * Sequence of reqs after the insert one and itself should
-		 * be adjusted because each timeout req consumes a slot.
-		 */
-		span++;
-		nxt->sequence++;
 	}
-	req->sequence -= span;
 add:
 	list_add(&req->list, entry);
 	data->timer.function = io_timeout_fn;
-- 
2.24.0
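
The io_check_in_range() helper introduced above leans on unsigned
wraparound arithmetic. A standalone demo of the trick (plain C, with
UINT32_MAX from stdint.h standing in for the kernel's UINT_MAX):

#include <assert.h>
#include <stdint.h>

static int in_range(uint32_t pos, uint32_t start, uint32_t end)
{
	/* unsigned subtraction keeps this correct across wraparound */
	return (pos - start) <= (end - start);
}

int main(void)
{
	/* ordinary window */
	assert(in_range(5, 3, 10));
	assert(!in_range(11, 3, 10));

	/* window wrapping past UINT32_MAX: [UINT32_MAX - 3, 2] */
	assert(in_range(UINT32_MAX, UINT32_MAX - 3, 2));
	assert(in_range(1, UINT32_MAX - 3, 2));
	assert(!in_range(3, UINT32_MAX - 3, 2));
	return 0;
}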



* [PATCH 4/5] io_uring: don't trigger timeout with another t-out
From: Pavel Begunkov @ 2020-04-30 19:31 UTC
  To: Jens Axboe, io-uring, linux-kernel

When deciding whether to fire a timeout based on the number of
completions, ignore CQEs emitted by other timeouts.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 19 +++----------------
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 006ac57af842..fb8ec4b00375 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1098,33 +1098,20 @@ static void io_kill_timeouts(struct io_ring_ctx *ctx)
 	spin_unlock_irq(&ctx->completion_lock);
 }
 
-static inline bool io_check_in_range(u32 pos, u32 start, u32 end)
-{
-	/* if @end < @start, the range wraps: check [start, UINT_MAX] + [0, end] */
-	return (pos - start) <= (end - start);
-}
-
 static void __io_flush_timeouts(struct io_ring_ctx *ctx)
 {
-	u32 end, start;
-
-	start = end = ctx->cached_cq_tail;
 	do {
 		struct io_kiocb *req = list_first_entry(&ctx->timeout_list,
 							struct io_kiocb, list);
 
 		if (req->flags & REQ_F_TIMEOUT_NOSEQ)
 			break;
-		/*
-		 * multiple timeouts may have the same target,
-		 * check that @req is in [first_tail, cur_tail]
-		 */
-		if (!io_check_in_range(req->timeout.target_seq, start, end))
+		if (req->timeout.target_seq != ctx->cached_cq_tail
+					- atomic_read(&ctx->cq_timeouts))
 			break;
 
 		list_del_init(&req->list);
 		io_kill_timeout(req);
-		end = ctx->cached_cq_tail;
 	} while (!list_empty(&ctx->timeout_list));
 }
 
@@ -4688,7 +4675,7 @@ static int io_timeout(struct io_kiocb *req)
 		goto add;
 	}
 
-	tail = ctx->cached_cq_tail;
+	tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
 	req->timeout.target_seq = tail + off;
 
 	/*
-- 
2.24.0
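
A standalone restatement of the accounting change (plain C, made-up
numbers): measuring targets against the tail minus the timeout-CQE
count keeps one timeout's completion from satisfying another.

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint32_t cq_tail = 0, cq_timeouts = 0;

	/* timeout A is armed to fire after 2 CQEs */
	uint32_t target_a = (cq_tail - cq_timeouts) + 2;

	cq_tail++;			/* one normal completion     */
	cq_tail++; cq_timeouts++;	/* timeout B fires, adds CQE */

	/* raw tail (the old scheme) would fire A now... */
	assert(cq_tail == target_a);
	/* ...but only one non-timeout CQE happened, so A must wait */
	assert(cq_tail - cq_timeouts != target_a);
	return 0;
}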



* [PATCH 5/5] io_uring: fix timeout offset with batch CQ commit
From: Pavel Begunkov @ 2020-04-30 19:31 UTC
  To: Jens Axboe, io-uring, linux-kernel

Completions may be done in batches, where io_commit_cqring() is called
only once at the end. That means timeout sequence checks are done only
once as well and don't consider the completions in between, potentially
failing to trigger some timeouts.

Do separate CQ sequence accounting in u64. When checking timeout
sequences, look up to UINT_MAX sequences behind the current one, which
may have been missed. This is safe because sqe->off is u32, so a
pending target can't wrap around into the [seq - UINT_MAX, seq] window.

It's also necessary to decouple CQ timeout sequences from
ctx->cached_cq_tail for implementing the "single CQE per link" feature
and others.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index fb8ec4b00375..f09c1d3a7e63 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -298,6 +298,7 @@ struct io_ring_ctx {
 		unsigned		cq_entries;
 		unsigned		cq_mask;
 		atomic_t		cq_timeouts;
+		u64			cq_seq;
 		unsigned long		cq_check_overflow;
 		struct wait_queue_head	cq_wait;
 		struct fasync_struct	*cq_fasync;
@@ -385,7 +386,7 @@ struct io_timeout {
 	u64				addr;
 	int				flags;
 	u32				off;
-	u32				target_seq;
+	u64				target_seq;
 };
 
 struct io_rw {
@@ -1081,6 +1082,7 @@ static void io_kill_timeout(struct io_kiocb *req)
 	ret = hrtimer_try_to_cancel(&req->io->timeout.timer);
 	if (ret != -1) {
 		atomic_inc(&req->ctx->cq_timeouts);
+		req->ctx->cq_seq--;
 		list_del_init(&req->list);
 		req->flags |= REQ_F_COMP_LOCKED;
 		io_cqring_fill_event(req, 0);
@@ -1098,16 +1100,31 @@ static void io_kill_timeouts(struct io_ring_ctx *ctx)
 	spin_unlock_irq(&ctx->completion_lock);
 }
 
+static inline bool io_check_in_range(u64 pos, u64 start, u64 end)
+{
+	/* if @end < @start, the range wraps: check [start, U64_MAX] + [0, end] */
+	return (pos - start) <= (end - start);
+}
+
 static void __io_flush_timeouts(struct io_ring_ctx *ctx)
 {
+	u64 start_seq = ctx->cq_seq;
+
+
+	/*
+	 * A batched CQ commit may have left some pending timeout sequences
+	 * behind @cq_seq. Look back to find them. Note that sqe->off is u32
+	 * while sequences are u64, so large offsets can't falsely trigger.
+	 */
+	start_seq -= UINT_MAX;
 	do {
 		struct io_kiocb *req = list_first_entry(&ctx->timeout_list,
 							struct io_kiocb, list);
 
 		if (req->flags & REQ_F_TIMEOUT_NOSEQ)
 			break;
-		if (req->timeout.target_seq != ctx->cached_cq_tail
-					- atomic_read(&ctx->cq_timeouts))
+		if (!io_check_in_range(req->timeout.target_seq, start_seq,
+					ctx->cq_seq))
 			break;
 
 		list_del_init(&req->list);
@@ -1143,6 +1160,7 @@ static struct io_uring_cqe *io_get_cqring(struct io_ring_ctx *ctx)
 		return NULL;
 
 	ctx->cached_cq_tail++;
+	ctx->cq_seq++;
 	return &rings->cqes[tail & ctx->cq_mask];
 }
 
@@ -4537,6 +4555,8 @@ static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
 	atomic_inc(&ctx->cq_timeouts);
 
 	spin_lock_irqsave(&ctx->completion_lock, flags);
+	ctx->cq_seq--;
+
 	/*
 	 * We could be racing with timeout deletion. If the list is empty,
 	 * then timeout lookup already found it and will be handling it.
@@ -4660,7 +4680,7 @@ static int io_timeout(struct io_kiocb *req)
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_timeout_data *data = &req->io->timeout;
 	struct list_head *entry;
-	u32 tail, off = req->timeout.off;
+	u32 off = req->timeout.off;
 
 	spin_lock_irq(&ctx->completion_lock);
 
@@ -4675,8 +4695,7 @@ static int io_timeout(struct io_kiocb *req)
 		goto add;
 	}
 
-	tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
-	req->timeout.target_seq = tail + off;
+	req->timeout.target_seq = ctx->cq_seq + off;
 
 	/*
 	 * Insertion sort, ensuring the first entry in the list is always
@@ -4684,7 +4703,7 @@ static int io_timeout(struct io_kiocb *req)
 	 */
 	list_for_each_prev(entry, &ctx->timeout_list) {
 		struct io_kiocb *nxt = list_entry(entry, struct io_kiocb, list);
-		u32 nxt_off = nxt->timeout.target_seq - tail;
+		u32 nxt_off = (u32)(nxt->timeout.target_seq - ctx->cq_seq);
 
 		if (!(nxt->flags & REQ_F_TIMEOUT_NOSEQ) && (off >= nxt_off))
 			break;
-- 
2.24.0
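
A standalone check of the look-back window (plain C, made-up numbers,
UINT32_MAX standing in for the kernel's UINT_MAX): after a batched
commit advances the sequence past a target in one jump, the window
still catches it, while a future target stays pending.

#include <assert.h>
#include <stdint.h>

static int in_range64(uint64_t pos, uint64_t start, uint64_t end)
{
	/* same unsigned-wraparound trick, widened to u64 */
	return (pos - start) <= (end - start);
}

int main(void)
{
	/* a batch moved cq_seq 999990 -> 1000000, skipping a timeout
	 * whose target sequence was 999995 */
	uint64_t cq_seq = 1000000;
	uint64_t start = cq_seq - UINT32_MAX;	/* look-back window */

	assert(in_range64(999995, start, cq_seq));	/* missed: fires */
	assert(!in_range64(cq_seq + 5, start, cq_seq));	/* future: waits */

	/* sqe->off is u32, so a pending target is at most UINT32_MAX
	 * ahead of cq_seq and can never alias into the window */
	return 0;
}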



* Re: [PATCH 0/5] timeout fixes
From: Jens Axboe @ 2020-05-01  4:26 UTC
  To: Pavel Begunkov, io-uring, linux-kernel

On 5/1/20 3:38 AM, Pavel Begunkov wrote:
> On 01/05/2020 11:21, Pavel Begunkov wrote:
>> On 30/04/2020 22:31, Pavel Begunkov wrote:
>>> [1,2] are small unrelated patches.
>>> [3,4] are the last two timeout patches, with one variable renamed.
>>> [5] fixes a timeout problem related to batched CQ commits. From
>>> what I can see, this should be the last of the timeout fixes.
>>
>> Something went wrong with testing or rebasing. Never mind this.
> 
> io_uring-5.7 hangs on the first test in link_timeout.c. I'll debug it today,
> but by any chance, does anyone happen to know anything about it?

That's not your stuff, see:

https://lore.kernel.org/linux-fsdevel/269ef3a5-e30f-ceeb-5f5e-58563e7c5367@kernel.dk/T/#ma61d47f59eaaa7f04ae686c117fab69c957e0d7d

which then just turned into a modification to a patch in io_uring-5.7
instead. Just force rebase that branch and it should work fine.

-- 
Jens Axboe



* Re: [PATCH 0/5] timeout fixes
From: Pavel Begunkov @ 2020-05-01  8:21 UTC
  To: Jens Axboe, io-uring, linux-kernel

On 30/04/2020 22:31, Pavel Begunkov wrote:
> [1,2] are small unrelated patches.
> [3,4] are the last two timeout patches, with one variable renamed.
> [5] fixes a timeout problem related to batched CQ commits. From
> what I can see, this should be the last of the timeout fixes.

Something went wrong with testing or rebasing. Never mind this.

> 
> Pavel Begunkov (5):
>   io_uring: check non-sync defer_list carefully
>   io_uring: pass nxt from sync_file_range()
>   io_uring: trigger timeout after any sqe->off CQEs
>   io_uring: don't trigger timeout with another t-out
>   io_uring: fix timeout offset with batch CQ commit
> 
>  fs/io_uring.c | 130 +++++++++++++++++++++-----------------------------
>  1 file changed, 54 insertions(+), 76 deletions(-)
> 

-- 
Pavel Begunkov


* Re: [PATCH 0/5] timeout fixes
From: Pavel Begunkov @ 2020-05-01  9:38 UTC
  To: Jens Axboe, io-uring, linux-kernel

On 01/05/2020 11:21, Pavel Begunkov wrote:
> On 30/04/2020 22:31, Pavel Begunkov wrote:
>> [1,2] are small unrelated patches.
>> [3,4] are the last two timeout patches, with one variable renamed.
>> [5] fixes a timeout problem related to batched CQ commits. From
>> what I can see, this should be the last of the timeout fixes.
> 
> Something went wrong with testing or rebasing. Never mind this.

io_uring-5.7 hangs on the first test in link_timeout.c. I'll debug it today,
but by any chance, does anyone happen to know anything about it?

> 
>>
>> Pavel Begunkov (5):
>>   io_uring: check non-sync defer_list carefully
>>   io_uring: pass nxt from sync_file_range()
>>   io_uring: trigger timeout after any sqe->off CQEs
>>   io_uring: don't trigger timeout with another t-out
>>   io_uring: fix timeout offset with batch CQ commit
>>
>>  fs/io_uring.c | 130 +++++++++++++++++++++-----------------------------
>>  1 file changed, 54 insertions(+), 76 deletions(-)
>>
> 

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 0/5] timeout fixes
From: Pavel Begunkov @ 2020-05-01 13:55 UTC
  To: Jens Axboe, io-uring, linux-kernel

On 01/05/2020 07:26, Jens Axboe wrote:
> On 5/1/20 3:38 AM, Pavel Begunkov wrote:
>> On 01/05/2020 11:21, Pavel Begunkov wrote:
>>> On 30/04/2020 22:31, Pavel Begunkov wrote:
>>>> [1,2] are small unrelated patches.
>>>> [3,4] are the last two timeout patches, with one variable renamed.
>>>> [5] fixes a timeout problem related to batched CQ commits. From
>>>> what I can see, this should be the last of the timeout fixes.
>>>
>>> Something went wrong with testing or rebasing. Never mind this.
>>
>> io_uring-5.7 hangs on the first test in link_timeout.c. I'll debug it today,
>> but by any chance, does anyone happen to know anything about it?
> 

Yeah, just found the culprit myself

> That's not your stuff, see:
> 
> https://lore.kernel.org/linux-fsdevel/269ef3a5-e30f-ceeb-5f5e-58563e7c5367@kernel.dk/T/#ma61d47f59eaaa7f04ae686c117fab69c957e0d7d
> 
> which then just turned into a modification to a patch in io_uring-5.7
> instead. Just force rebase that branch and it should work fine.

Got it, thanks

-- 
Pavel Begunkov

