linux-kernel.vger.kernel.org archive mirror
* [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
@ 2022-04-28 12:08 Yu Kuai
  2022-04-28 12:08 ` [PATCH -next v5 1/3] block, bfq: record how many queues are busy in bfq_group Yu Kuai
                   ` (3 more replies)
  0 siblings, 4 replies; 26+ messages in thread
From: Yu Kuai @ 2022-04-28 12:08 UTC (permalink / raw)
  To: jack, paolo.valente, axboe, tj
  Cc: linux-block, cgroups, linux-kernel, yukuai3, yi.zhang

Changes in v5:
 - rename bfq_add_busy_queues() to bfq_inc_busy_queues() in patch 1
 - fix wrong definition in patch 1
 - fix spelling mistake in patch 2: leaset -> least
 - update comments in patch 3
 - add reviewed-by tag in patch 2,3

Changes in v4:
 - split bfq_update_busy_queues() into bfq_add/dec_busy_queues(),
   as suggested by Jan Kara.
 - remove the unused 'in_groups_with_pending_reqs'.

Changes in v3:
 - remove the cleanup patch that is irrelevant now (I'll post it
   separately).
 - instead of hacking wr queues and using weights tree insertion/removal,
   using bfq_add/del_bfqq_busy() to count the number of groups
   (suggested by Jan Kara).

Changes in v2:
 - Use a different approach to count the root group, which is much simpler.

Currently, bfq can't handle sync io concurrently as long as it is
not issued from the root group. This is because
'bfqd->num_groups_with_pending_reqs > 0' is always true in
bfq_asymmetric_scenario().

The way that a bfqg is counted into 'num_groups_with_pending_reqs':

Before this patchset:
 1) The root group will never be counted.
 2) Count if the bfqg or its child bfqgs have pending requests.
 3) Don't count if the bfqg and its child bfqgs have completed all
    their requests.

After this patchset:
 1) The root group is counted.
 2) Count if the bfqg has at least one bfqq that is marked busy.
 3) Don't count if the bfqg doesn't have any busy bfqqs.

The main reason to use busy state of bfqq instead of 'pending requests'
is that bfqq can stay busy after dispatching the last request if idling
is needed for service guarantees.

With the above changes, concurrent sync io can be supported if only
one group is activated.
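The counting scheme above can be sketched as a tiny userspace model
(hypothetical names; the real kernel code lives in block/bfq-wf2q.c and
block/bfq-iosched.c). A group is counted exactly while it has at least
one busy queue, and with the root group included, "more than one counted
group" becomes the asymmetry test:

```c
/* Minimal userspace sketch of the busy-queue counting; not kernel code. */
struct group {
	int busy_queues;	/* models bfq_group->busy_queues */
};

static int num_groups_with_busy_queues;

static void inc_busy(struct group *g)
{
	/* 0 -> 1 transition: the group becomes counted */
	if (!g->busy_queues++)
		num_groups_with_busy_queues++;
}

static void dec_busy(struct group *g)
{
	/* 1 -> 0 transition: the group is no longer counted */
	if (!--g->busy_queues)
		num_groups_with_busy_queues--;
}

/* With the root group counted too, '> 1' replaces the old '> 0' check */
static int asymmetric_scenario(void)
{
	return num_groups_with_busy_queues > 1;
}
```

When only one group (including the root group alone) has busy queues,
the check returns false, so idling can be skipped and sync io can be
dispatched concurrently.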

fio test script (startdelay is used to avoid queue merging):
[global]
filename=/dev/nvme0n1
allow_mounted_write=0
ioengine=psync
direct=1
ioscheduler=bfq
offset_increment=10g
group_reporting
rw=randwrite
bs=4k

[test1]
numjobs=1

[test2]
startdelay=1
numjobs=1

[test3]
startdelay=2
numjobs=1

[test4]
startdelay=3
numjobs=1

[test5]
startdelay=4
numjobs=1

[test6]
startdelay=5
numjobs=1

[test7]
startdelay=6
numjobs=1

[test8]
startdelay=7
numjobs=1

test result:
running fio on root cgroup
v5.18-rc1:         550 MiB/s
v5.18-rc1-patched: 550 MiB/s

running fio on non-root cgroup
v5.18-rc1:         349 MiB/s
v5.18-rc1-patched: 550 MiB/s

Note that I also tested null_blk with "irqmode=2
completion_nsec=100000000(100ms) hw_queue_depth=1", and the tests show
that service guarantees are still preserved.

Previous versions:
RFC: https://lore.kernel.org/all/20211127101132.486806-1-yukuai3@huawei.com/
v1: https://lore.kernel.org/all/20220305091205.4188398-1-yukuai3@huawei.com/
v2: https://lore.kernel.org/all/20220416093753.3054696-1-yukuai3@huawei.com/
v3: https://lore.kernel.org/all/20220427124722.48465-1-yukuai3@huawei.com/
v4: https://lore.kernel.org/all/20220428111907.3635820-1-yukuai3@huawei.com/

Yu Kuai (3):
  block, bfq: record how many queues are busy in bfq_group
  block, bfq: refactor the counting of 'num_groups_with_pending_reqs'
  block, bfq: do not idle if only one group is activated

 block/bfq-cgroup.c  |  1 +
 block/bfq-iosched.c | 48 +++-----------------------------------
 block/bfq-iosched.h | 57 +++++++--------------------------------------
 block/bfq-wf2q.c    | 35 +++++++++++++++++-----------
 4 files changed, 35 insertions(+), 106 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH -next v5 1/3] block, bfq: record how many queues are busy in bfq_group
  2022-04-28 12:08 [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion Yu Kuai
@ 2022-04-28 12:08 ` Yu Kuai
  2022-04-28 12:45   ` Jan Kara
  2022-04-28 12:08 ` [PATCH -next v5 2/3] block, bfq: refactor the counting of 'num_groups_with_pending_reqs' Yu Kuai
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 26+ messages in thread
From: Yu Kuai @ 2022-04-28 12:08 UTC (permalink / raw)
  To: jack, paolo.valente, axboe, tj
  Cc: linux-block, cgroups, linux-kernel, yukuai3, yi.zhang

Prepare to refactor the counting of 'num_groups_with_pending_reqs'.

Add a counter 'busy_queues' in bfq_group, and update it in
bfq_add/del_bfqq_busy().

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/bfq-cgroup.c  |  1 +
 block/bfq-iosched.h |  2 ++
 block/bfq-wf2q.c    | 20 ++++++++++++++++++++
 3 files changed, 23 insertions(+)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 09574af83566..4d516879d9fa 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -557,6 +557,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd)
 				   */
 	bfqg->bfqd = bfqd;
 	bfqg->active_entities = 0;
+	bfqg->busy_queues = 0;
 	bfqg->online = true;
 	bfqg->rq_pos_tree = RB_ROOT;
 }
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 978ef5d6fe6a..3847f4ab77ac 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -906,6 +906,7 @@ struct bfq_group_data {
  *                   are groups with more than one active @bfq_entity
  *                   (see the comments to the function
  *                   bfq_bfqq_may_idle()).
+ * @busy_queues: number of busy bfqqs.
  * @rq_pos_tree: rbtree sorted by next_request position, used when
  *               determining if two or more queues have interleaving
  *               requests (see bfq_find_close_cooperator()).
@@ -942,6 +943,7 @@ struct bfq_group {
 	struct bfq_entity *my_entity;
 
 	int active_entities;
+	int busy_queues;
 
 	struct rb_root rq_pos_tree;
 
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index f8eb340381cf..d9ff33e0be38 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -218,6 +218,16 @@ static bool bfq_no_longer_next_in_service(struct bfq_entity *entity)
 	return false;
 }
 
+static void bfq_inc_busy_queues(struct bfq_queue *bfqq)
+{
+	bfqq_group(bfqq)->busy_queues++;
+}
+
+static void bfq_dec_busy_queues(struct bfq_queue *bfqq)
+{
+	bfqq_group(bfqq)->busy_queues--;
+}
+
 #else /* CONFIG_BFQ_GROUP_IOSCHED */
 
 static bool bfq_update_parent_budget(struct bfq_entity *next_in_service)
@@ -230,6 +240,14 @@ static bool bfq_no_longer_next_in_service(struct bfq_entity *entity)
 	return true;
 }
 
+static void bfq_inc_busy_queues(struct bfq_queue *bfqq)
+{
+}
+
+static void bfq_dec_busy_queues(struct bfq_queue *bfqq)
+{
+}
+
 #endif /* CONFIG_BFQ_GROUP_IOSCHED */
 
 /*
@@ -1660,6 +1678,7 @@ void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	bfq_clear_bfqq_busy(bfqq);
 
 	bfqd->busy_queues[bfqq->ioprio_class - 1]--;
+	bfq_dec_busy_queues(bfqq);
 
 	if (bfqq->wr_coeff > 1)
 		bfqd->wr_busy_queues--;
@@ -1683,6 +1702,7 @@ void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 
 	bfq_mark_bfqq_busy(bfqq);
 	bfqd->busy_queues[bfqq->ioprio_class - 1]++;
+	bfq_inc_busy_queues(bfqq);
 
 	if (!bfqq->dispatched)
 		if (bfqq->wr_coeff == 1)
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH -next v5 2/3] block, bfq: refactor the counting of 'num_groups_with_pending_reqs'
  2022-04-28 12:08 [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion Yu Kuai
  2022-04-28 12:08 ` [PATCH -next v5 1/3] block, bfq: record how many queues are busy in bfq_group Yu Kuai
@ 2022-04-28 12:08 ` Yu Kuai
  2022-04-28 12:08 ` [PATCH -next v5 3/3] block, bfq: do not idle if only one group is activated Yu Kuai
  2022-05-05  1:00 ` [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion yukuai (C)
  3 siblings, 0 replies; 26+ messages in thread
From: Yu Kuai @ 2022-04-28 12:08 UTC (permalink / raw)
  To: jack, paolo.valente, axboe, tj
  Cc: linux-block, cgroups, linux-kernel, yukuai3, yi.zhang

Currently, bfq can't handle sync io concurrently as long as it is
not issued from the root group. This is because
'bfqd->num_groups_with_pending_reqs > 0' is always true in
bfq_asymmetric_scenario().

The way that a bfqg is counted into 'num_groups_with_pending_reqs':

Before this patch:
 1) The root group will never be counted.
 2) Count if the bfqg or its child bfqgs have pending requests.
 3) Don't count if the bfqg and its child bfqgs have completed all
    their requests.

After this patch:
 1) The root group is counted.
 2) Count if the bfqg has at least one bfqq that is marked busy.
 3) Don't count if the bfqg doesn't have any busy bfqqs.

The main reason to use busy state of bfqq instead of 'pending requests'
is that bfqq can stay busy after dispatching the last request if idling
is needed for service guarantees.

With this change, the case where only one group is activated can be
detected, and the next patch will use it to support concurrent sync
io in that case.

This patch also renames 'num_groups_with_pending_reqs' to
'num_groups_with_busy_queues'.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
---
 block/bfq-iosched.c | 46 ++-----------------------------------
 block/bfq-iosched.h | 55 ++++++---------------------------------------
 block/bfq-wf2q.c    | 19 ++++------------
 3 files changed, 13 insertions(+), 107 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index e47c75f1fa0f..609b4e894684 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -844,7 +844,7 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
 
 	return varied_queue_weights || multiple_classes_busy
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-	       || bfqd->num_groups_with_pending_reqs > 0
+	       || bfqd->num_groups_with_busy_queues > 0
 #endif
 		;
 }
@@ -962,48 +962,6 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
 void bfq_weights_tree_remove(struct bfq_data *bfqd,
 			     struct bfq_queue *bfqq)
 {
-	struct bfq_entity *entity = bfqq->entity.parent;
-
-	for_each_entity(entity) {
-		struct bfq_sched_data *sd = entity->my_sched_data;
-
-		if (sd->next_in_service || sd->in_service_entity) {
-			/*
-			 * entity is still active, because either
-			 * next_in_service or in_service_entity is not
-			 * NULL (see the comments on the definition of
-			 * next_in_service for details on why
-			 * in_service_entity must be checked too).
-			 *
-			 * As a consequence, its parent entities are
-			 * active as well, and thus this loop must
-			 * stop here.
-			 */
-			break;
-		}
-
-		/*
-		 * The decrement of num_groups_with_pending_reqs is
-		 * not performed immediately upon the deactivation of
-		 * entity, but it is delayed to when it also happens
-		 * that the first leaf descendant bfqq of entity gets
-		 * all its pending requests completed. The following
-		 * instructions perform this delayed decrement, if
-		 * needed. See the comments on
-		 * num_groups_with_pending_reqs for details.
-		 */
-		if (entity->in_groups_with_pending_reqs) {
-			entity->in_groups_with_pending_reqs = false;
-			bfqd->num_groups_with_pending_reqs--;
-		}
-	}
-
-	/*
-	 * Next function is invoked last, because it causes bfqq to be
-	 * freed if the following holds: bfqq is not in service and
-	 * has no dispatched request. DO NOT use bfqq after the next
-	 * function invocation.
-	 */
 	__bfq_weights_tree_remove(bfqd, bfqq,
 				  &bfqd->queue_weights_tree);
 }
@@ -7107,7 +7065,7 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
 	bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
 
 	bfqd->queue_weights_tree = RB_ROOT_CACHED;
-	bfqd->num_groups_with_pending_reqs = 0;
+	bfqd->num_groups_with_busy_queues = 0;
 
 	INIT_LIST_HEAD(&bfqd->active_list);
 	INIT_LIST_HEAD(&bfqd->idle_list);
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 3847f4ab77ac..b71a088a7f1d 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -197,9 +197,6 @@ struct bfq_entity {
 	/* flag, set to request a weight, ioprio or ioprio_class change  */
 	int prio_changed;
 
-	/* flag, set if the entity is counted in groups_with_pending_reqs */
-	bool in_groups_with_pending_reqs;
-
 	/* last child queue of entity created (for non-leaf entities) */
 	struct bfq_queue *last_bfqq_created;
 };
@@ -495,52 +492,14 @@ struct bfq_data {
 	struct rb_root_cached queue_weights_tree;
 
 	/*
-	 * Number of groups with at least one descendant process that
-	 * has at least one request waiting for completion. Note that
-	 * this accounts for also requests already dispatched, but not
-	 * yet completed. Therefore this number of groups may differ
-	 * (be larger) than the number of active groups, as a group is
-	 * considered active only if its corresponding entity has
-	 * descendant queues with at least one request queued. This
-	 * number is used to decide whether a scenario is symmetric.
-	 * For a detailed explanation see comments on the computation
-	 * of the variable asymmetric_scenario in the function
-	 * bfq_better_to_idle().
-	 *
-	 * However, it is hard to compute this number exactly, for
-	 * groups with multiple descendant processes. Consider a group
-	 * that is inactive, i.e., that has no descendant process with
-	 * pending I/O inside BFQ queues. Then suppose that
-	 * num_groups_with_pending_reqs is still accounting for this
-	 * group, because the group has descendant processes with some
-	 * I/O request still in flight. num_groups_with_pending_reqs
-	 * should be decremented when the in-flight request of the
-	 * last descendant process is finally completed (assuming that
-	 * nothing else has changed for the group in the meantime, in
-	 * terms of composition of the group and active/inactive state of child
-	 * groups and processes). To accomplish this, an additional
-	 * pending-request counter must be added to entities, and must
-	 * be updated correctly. To avoid this additional field and operations,
-	 * we resort to the following tradeoff between simplicity and
-	 * accuracy: for an inactive group that is still counted in
-	 * num_groups_with_pending_reqs, we decrement
-	 * num_groups_with_pending_reqs when the first descendant
-	 * process of the group remains with no request waiting for
-	 * completion.
-	 *
-	 * Even this simpler decrement strategy requires a little
-	 * carefulness: to avoid multiple decrements, we flag a group,
-	 * more precisely an entity representing a group, as still
-	 * counted in num_groups_with_pending_reqs when it becomes
-	 * inactive. Then, when the first descendant queue of the
-	 * entity remains with no request waiting for completion,
-	 * num_groups_with_pending_reqs is decremented, and this flag
-	 * is reset. After this flag is reset for the entity,
-	 * num_groups_with_pending_reqs won't be decremented any
-	 * longer in case a new descendant queue of the entity remains
-	 * with no request waiting for completion.
+	 * Number of groups with at least one bfqq that is marked busy,
+	 * and this number is used to decide whether a scenario is symmetric.
+	 * Note that a busy bfqq doesn't necessarily contain requests.
+	 * If idling is needed for service guarantees, bfqq will stay busy
+	 * after dispatching the last request, see details in
+	 * __bfq_bfqq_expire().
 	 */
-	unsigned int num_groups_with_pending_reqs;
+	unsigned int num_groups_with_busy_queues;
 
 	/*
 	 * Per-class (RT, BE, IDLE) number of bfq_queues containing
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index d9ff33e0be38..42464e6ff40c 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -220,12 +220,14 @@ static bool bfq_no_longer_next_in_service(struct bfq_entity *entity)
 
 static void bfq_inc_busy_queues(struct bfq_queue *bfqq)
 {
-	bfqq_group(bfqq)->busy_queues++;
+	if (!(bfqq_group(bfqq)->busy_queues++))
+		bfqq->bfqd->num_groups_with_busy_queues++;
 }
 
 static void bfq_dec_busy_queues(struct bfq_queue *bfqq)
 {
-	bfqq_group(bfqq)->busy_queues--;
+	if (!(--bfqq_group(bfqq)->busy_queues))
+		bfqq->bfqd->num_groups_with_busy_queues--;
 }
 
 #else /* CONFIG_BFQ_GROUP_IOSCHED */
@@ -1002,19 +1004,6 @@ static void __bfq_activate_entity(struct bfq_entity *entity,
 		entity->on_st_or_in_serv = true;
 	}
 
-#ifdef CONFIG_BFQ_GROUP_IOSCHED
-	if (!bfq_entity_to_bfqq(entity)) { /* bfq_group */
-		struct bfq_group *bfqg =
-			container_of(entity, struct bfq_group, entity);
-		struct bfq_data *bfqd = bfqg->bfqd;
-
-		if (!entity->in_groups_with_pending_reqs) {
-			entity->in_groups_with_pending_reqs = true;
-			bfqd->num_groups_with_pending_reqs++;
-		}
-	}
-#endif
-
 	bfq_update_fin_time_enqueue(entity, st, backshifted);
 }
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH -next v5 3/3] block, bfq: do not idle if only one group is activated
  2022-04-28 12:08 [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion Yu Kuai
  2022-04-28 12:08 ` [PATCH -next v5 1/3] block, bfq: record how many queues are busy in bfq_group Yu Kuai
  2022-04-28 12:08 ` [PATCH -next v5 2/3] block, bfq: refactor the counting of 'num_groups_with_pending_reqs' Yu Kuai
@ 2022-04-28 12:08 ` Yu Kuai
  2022-05-05  1:00 ` [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion yukuai (C)
  3 siblings, 0 replies; 26+ messages in thread
From: Yu Kuai @ 2022-04-28 12:08 UTC (permalink / raw)
  To: jack, paolo.valente, axboe, tj
  Cc: linux-block, cgroups, linux-kernel, yukuai3, yi.zhang

Now that the root group is counted into 'num_groups_with_busy_queues',
'num_groups_with_busy_queues > 0' is always true in
bfq_asymmetric_scenario(). Thus change the condition to '> 1'.

As a result, this change enables concurrent sync io when only one
group is activated.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
---
 block/bfq-iosched.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 609b4e894684..142e1ca4600f 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -812,7 +812,7 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
  * much easier to maintain the needed state:
  * 1) all active queues have the same weight,
  * 2) all active queues belong to the same I/O-priority class,
- * 3) there are no active groups.
+ * 3) there is at most one active group.
  * In particular, the last condition is always true if hierarchical
  * support or the cgroups interface are not enabled, thus no state
  * needs to be maintained in this case.
@@ -844,7 +844,7 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
 
 	return varied_queue_weights || multiple_classes_busy
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-	       || bfqd->num_groups_with_busy_queues > 0
+	       || bfqd->num_groups_with_busy_queues > 1
 #endif
 		;
 }
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 1/3] block, bfq: record how many queues are busy in bfq_group
  2022-04-28 12:08 ` [PATCH -next v5 1/3] block, bfq: record how many queues are busy in bfq_group Yu Kuai
@ 2022-04-28 12:45   ` Jan Kara
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Kara @ 2022-04-28 12:45 UTC (permalink / raw)
  To: Yu Kuai
  Cc: jack, paolo.valente, axboe, tj, linux-block, cgroups,
	linux-kernel, yi.zhang

On Thu 28-04-22 20:08:35, Yu Kuai wrote:
> Prepare to refactor the counting of 'num_groups_with_pending_reqs'.
> 
> Add a counter 'busy_queues' in bfq_group, and update it in
> bfq_add/del_bfqq_busy().
> 
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>

Looks good. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  block/bfq-cgroup.c  |  1 +
>  block/bfq-iosched.h |  2 ++
>  block/bfq-wf2q.c    | 20 ++++++++++++++++++++
>  3 files changed, 23 insertions(+)
> 
> diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
> index 09574af83566..4d516879d9fa 100644
> --- a/block/bfq-cgroup.c
> +++ b/block/bfq-cgroup.c
> @@ -557,6 +557,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd)
>  				   */
>  	bfqg->bfqd = bfqd;
>  	bfqg->active_entities = 0;
> +	bfqg->busy_queues = 0;
>  	bfqg->online = true;
>  	bfqg->rq_pos_tree = RB_ROOT;
>  }
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index 978ef5d6fe6a..3847f4ab77ac 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -906,6 +906,7 @@ struct bfq_group_data {
>   *                   are groups with more than one active @bfq_entity
>   *                   (see the comments to the function
>   *                   bfq_bfqq_may_idle()).
> + * @busy_queues: number of busy bfqqs.
>   * @rq_pos_tree: rbtree sorted by next_request position, used when
>   *               determining if two or more queues have interleaving
>   *               requests (see bfq_find_close_cooperator()).
> @@ -942,6 +943,7 @@ struct bfq_group {
>  	struct bfq_entity *my_entity;
>  
>  	int active_entities;
> +	int busy_queues;
>  
>  	struct rb_root rq_pos_tree;
>  
> diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
> index f8eb340381cf..d9ff33e0be38 100644
> --- a/block/bfq-wf2q.c
> +++ b/block/bfq-wf2q.c
> @@ -218,6 +218,16 @@ static bool bfq_no_longer_next_in_service(struct bfq_entity *entity)
>  	return false;
>  }
>  
> +static void bfq_inc_busy_queues(struct bfq_queue *bfqq)
> +{
> +	bfqq_group(bfqq)->busy_queues++;
> +}
> +
> +static void bfq_dec_busy_queues(struct bfq_queue *bfqq)
> +{
> +	bfqq_group(bfqq)->busy_queues--;
> +}
> +
>  #else /* CONFIG_BFQ_GROUP_IOSCHED */
>  
>  static bool bfq_update_parent_budget(struct bfq_entity *next_in_service)
> @@ -230,6 +240,14 @@ static bool bfq_no_longer_next_in_service(struct bfq_entity *entity)
>  	return true;
>  }
>  
> +static void bfq_inc_busy_queues(struct bfq_queue *bfqq)
> +{
> +}
> +
> +static void bfq_dec_busy_queues(struct bfq_queue *bfqq)
> +{
> +}
> +
>  #endif /* CONFIG_BFQ_GROUP_IOSCHED */
>  
>  /*
> @@ -1660,6 +1678,7 @@ void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
>  	bfq_clear_bfqq_busy(bfqq);
>  
>  	bfqd->busy_queues[bfqq->ioprio_class - 1]--;
> +	bfq_dec_busy_queues(bfqq);
>  
>  	if (bfqq->wr_coeff > 1)
>  		bfqd->wr_busy_queues--;
> @@ -1683,6 +1702,7 @@ void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
>  
>  	bfq_mark_bfqq_busy(bfqq);
>  	bfqd->busy_queues[bfqq->ioprio_class - 1]++;
> +	bfq_inc_busy_queues(bfqq);
>  
>  	if (!bfqq->dispatched)
>  		if (bfqq->wr_coeff == 1)
> -- 
> 2.31.1
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-04-28 12:08 [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion Yu Kuai
                   ` (2 preceding siblings ...)
  2022-04-28 12:08 ` [PATCH -next v5 3/3] block, bfq: do not idle if only one group is activated Yu Kuai
@ 2022-05-05  1:00 ` yukuai (C)
  2022-05-14  9:29   ` yukuai (C)
  3 siblings, 1 reply; 26+ messages in thread
From: yukuai (C) @ 2022-05-05  1:00 UTC (permalink / raw)
  To: paolo.valente, axboe
  Cc: jack, tj, linux-block, cgroups, linux-kernel, yi.zhang

Hi, Paolo

Can you take a look at this patchset? It has been quite a long time
since we spotted this problem...

Thanks,
Kuai


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-05  1:00 ` [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion yukuai (C)
@ 2022-05-14  9:29   ` yukuai (C)
  2022-05-21  7:22     ` yukuai (C)
  0 siblings, 1 reply; 26+ messages in thread
From: yukuai (C) @ 2022-05-14  9:29 UTC (permalink / raw)
  To: paolo.valente, axboe
  Cc: jack, tj, linux-block, cgroups, linux-kernel, yi.zhang

On 2022/05/05 9:00, yukuai (C) wrote:
> Hi, Paolo
> 
> Can you take a look at this patchset? It has been quite a long time
> since we spotted this problem...
> 

friendly ping ...
> Thanks,
> Kuai
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-14  9:29   ` yukuai (C)
@ 2022-05-21  7:22     ` yukuai (C)
  2022-05-21 12:21       ` Jens Axboe
  0 siblings, 1 reply; 26+ messages in thread
From: yukuai (C) @ 2022-05-21  7:22 UTC (permalink / raw)
  To: paolo.valente, axboe
  Cc: jack, tj, linux-block, cgroups, linux-kernel, yi.zhang

On 2022/05/14 17:29, yukuai (C) wrote:
> On 2022/05/05 9:00, yukuai (C) wrote:
>> Hi, Paolo
>>
>> Can you take a look at this patchset? It has been quite a long time
>> since we spotted this problem...
>>
> 
> friendly ping ...
friendly ping ...
>> Thanks,
>> Kuai
>>
>> On 2022/04/28 20:08, Yu Kuai wrote:
>>> Changes in v5:
>>>   - rename bfq_add_busy_queues() to bfq_inc_busy_queues() in patch 1
>>>   - fix wrong definition in patch 1
>>>   - fix spelling mistake in patch 2: leaset -> least
>>>   - update comments in patch 3
>>>   - add reviewed-by tag in patch 2,3
>>>
>>> Changes in v4:
>>>   - split bfq_update_busy_queues() to bfq_add/dec_busy_queues(),
>>>     suggested by Jan Kara.
>>>   - remove unused 'in_groups_with_pending_reqs',
>>>
>>> Changes in v3:
>>>   - remove the cleanup patch that is irrelevant now (I'll post it
>>>     separately).
>>>   - instead of hacking wr queues and using weights tree 
>>> insertion/removal,
>>>     using bfq_add/del_bfqq_busy() to count the number of groups
>>>     (suggested by Jan Kara).
>>>
>>> Changes in v2:
>>>   - Use a different approach to count the root group, which is much simpler.
>>>
>>> Currently, bfq can't handle sync io concurrently as long as it is
>>> not issued from the root group. This is because
>>> 'bfqd->num_groups_with_pending_reqs > 0' is always true in
>>> bfq_asymmetric_scenario().
>>>
>>> The way that bfqg is counted into 'num_groups_with_pending_reqs':
>>>
>>> Before this patchset:
>>>   1) root group will never be counted.
>>>   2) Count if bfqg or its child bfqgs have pending requests.
>>>   3) Don't count if bfqg and its child bfqgs have completed all requests.
>>>
>>> After this patchset:
>>>   1) root group is counted.
>>>   2) Count if bfqg has at least one bfqq that is marked busy.
>>>   3) Don't count if bfqg doesn't have any busy bfqqs.
>>>
>>> The main reason to use busy state of bfqq instead of 'pending requests'
>>> is that bfqq can stay busy after dispatching the last request if idling
>>> is needed for service guarantees.
>>>
>>> With the above changes, concurrent sync io can be supported if only
>>> one group is activated.
>>>
>>> fio test script (startdelay is used to avoid queue merging):
>>> [global]
>>> filename=/dev/nvme0n1
>>> allow_mounted_write=0
>>> ioengine=psync
>>> direct=1
>>> ioscheduler=bfq
>>> offset_increment=10g
>>> group_reporting
>>> rw=randwrite
>>> bs=4k
>>>
>>> [test1]
>>> numjobs=1
>>>
>>> [test2]
>>> startdelay=1
>>> numjobs=1
>>>
>>> [test3]
>>> startdelay=2
>>> numjobs=1
>>>
>>> [test4]
>>> startdelay=3
>>> numjobs=1
>>>
>>> [test5]
>>> startdelay=4
>>> numjobs=1
>>>
>>> [test6]
>>> startdelay=5
>>> numjobs=1
>>>
>>> [test7]
>>> startdelay=6
>>> numjobs=1
>>>
>>> [test8]
>>> startdelay=7
>>> numjobs=1
>>>
>>> test result:
>>> running fio on root cgroup
>>> v5.18-rc1:         550 MiB/s
>>> v5.18-rc1-patched: 550 MiB/s
>>>
>>> running fio on non-root cgroup
>>> v5.18-rc1:         349 MiB/s
>>> v5.18-rc1-patched: 550 MiB/s
>>>
>>> Note that I also tested null_blk with "irqmode=2
>>> completion_nsec=100000000(100ms) hw_queue_depth=1", and the tests
>>> show that service guarantees are still preserved.
>>>
>>> Previous versions:
>>> RFC: 
>>> https://lore.kernel.org/all/20211127101132.486806-1-yukuai3@huawei.com/
>>> v1: 
>>> https://lore.kernel.org/all/20220305091205.4188398-1-yukuai3@huawei.com/
>>> v2: 
>>> https://lore.kernel.org/all/20220416093753.3054696-1-yukuai3@huawei.com/
>>> v3: 
>>> https://lore.kernel.org/all/20220427124722.48465-1-yukuai3@huawei.com/
>>> v4: 
>>> https://lore.kernel.org/all/20220428111907.3635820-1-yukuai3@huawei.com/
>>>
>>> Yu Kuai (3):
>>>    block, bfq: record how many queues are busy in bfq_group
>>>    block, bfq: refactor the counting of 'num_groups_with_pending_reqs'
>>>    block, bfq: do not idle if only one group is activated
>>>
>>>   block/bfq-cgroup.c  |  1 +
>>>   block/bfq-iosched.c | 48 +++-----------------------------------
>>>   block/bfq-iosched.h | 57 +++++++--------------------------------------
>>>   block/bfq-wf2q.c    | 35 +++++++++++++++++-----------
>>>   4 files changed, 35 insertions(+), 106 deletions(-)
>>>

^ permalink raw reply	[flat|nested] 26+ messages in thread
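
The counting rule described in the quoted cover letter (a bfq_group contributes to 'num_groups_with_pending_reqs' exactly while it has at least one busy bfqq) can be modeled outside the kernel. The sketch below is illustrative Python, not the actual C implementation; the helper names echo the patch titles, and the class and field names are assumptions made for the model:

```python
class BfqData:
    """Stands in for struct bfq_data: holds the global group counter."""
    def __init__(self):
        self.num_groups_with_pending_reqs = 0

class BfqGroup:
    """Stands in for struct bfq_group: tracks its own busy queues."""
    def __init__(self, bfqd):
        self.bfqd = bfqd
        self.busy_queues = 0

def bfq_inc_busy_queues(bfqg):
    # The first queue to become busy activates the group.
    if bfqg.busy_queues == 0:
        bfqg.bfqd.num_groups_with_pending_reqs += 1
    bfqg.busy_queues += 1

def bfq_dec_busy_queues(bfqg):
    # The last busy queue leaving deactivates the group.
    bfqg.busy_queues -= 1
    if bfqg.busy_queues == 0:
        bfqg.bfqd.num_groups_with_pending_reqs -= 1

bfqd = BfqData()
root = BfqGroup(bfqd)
bfq_inc_busy_queues(root)  # root group is counted too, per the patchset
bfq_inc_busy_queues(root)  # a second busy queue in the same group
print(bfqd.num_groups_with_pending_reqs)  # 1
```

With this rule, a check along the lines of "more than one group activated" stays false while every busy queue lives in a single group (root included), which is what allows the concurrent sync IO that the quoted fio results measure.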

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-21  7:22     ` yukuai (C)
@ 2022-05-21 12:21       ` Jens Axboe
  2022-05-23  1:10         ` yukuai (C)
  0 siblings, 1 reply; 26+ messages in thread
From: Jens Axboe @ 2022-05-21 12:21 UTC (permalink / raw)
  To: yukuai (C), paolo.valente
  Cc: jack, tj, linux-block, cgroups, linux-kernel, yi.zhang

On 5/21/22 1:22 AM, yukuai (C) wrote:
> On 2022/05/14 17:29, yukuai (C) wrote:
>> On 2022/05/05 9:00, yukuai (C) wrote:
>>> Hi, Paolo
>>>
>>> Can you take a look at this patchset? It has been quite a long time
>>> since we spotted this problem...
>>>
>>
>> friendly ping ...
> friendly ping ...

I can't speak for Paolo, but I've mentioned before that the majority
of your messages end up in my spam. That's still the case, in fact
I just marked maybe 10 of them as not spam.

You really need to get this issue sorted out, or you will continue
to have patches ignored because folks may simply not see them.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-21 12:21       ` Jens Axboe
@ 2022-05-23  1:10         ` yukuai (C)
  2022-05-23  1:24           ` Jens Axboe
  2022-05-23  8:59           ` Jan Kara
  0 siblings, 2 replies; 26+ messages in thread
From: yukuai (C) @ 2022-05-23  1:10 UTC (permalink / raw)
  To: Jens Axboe, paolo.valente
  Cc: jack, tj, linux-block, cgroups, linux-kernel, yi.zhang

On 2022/05/21 20:21, Jens Axboe wrote:
> On 5/21/22 1:22 AM, yukuai (C) wrote:
>> On 2022/05/14 17:29, yukuai (C) wrote:
>>> On 2022/05/05 9:00, yukuai (C) wrote:
>>>> Hi, Paolo
>>>>
>>>> Can you take a look at this patchset? It has been quite a long time
>>>> since we spotted this problem...
>>>>
>>>
>>> friendly ping ...
>> friendly ping ...
> 
> I can't speak for Paolo, but I've mentioned before that the majority
> of your messages end up in my spam. That's still the case, in fact
> I just marked maybe 10 of them as not spam.
> 
> You really need to get this issue sorted out, or you will continue
> to have patches ignored because folks may simply not see them.
>
Hi,

Thanks for your notice.

Is it just me or do you see someone else's messages from *huawei.com
end up in spam? I tried to seek help from our IT support, however, they
didn't find anything unusual...

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-23  1:10         ` yukuai (C)
@ 2022-05-23  1:24           ` Jens Axboe
  2022-05-23  8:18             ` Yu Kuai
  2022-05-23  8:59           ` Jan Kara
  1 sibling, 1 reply; 26+ messages in thread
From: Jens Axboe @ 2022-05-23  1:24 UTC (permalink / raw)
  To: yukuai (C), paolo.valente
  Cc: jack, tj, linux-block, cgroups, linux-kernel, yi.zhang

On 5/22/22 7:10 PM, yukuai (C) wrote:
> On 2022/05/21 20:21, Jens Axboe wrote:
>> On 5/21/22 1:22 AM, yukuai (C) wrote:
>>> On 2022/05/14 17:29, yukuai (C) wrote:
>>>> On 2022/05/05 9:00, yukuai (C) wrote:
>>>>> Hi, Paolo
>>>>>
>>>>> Can you take a look at this patchset? It has been quite a long time
>>>>> since we spotted this problem...
>>>>>
>>>>
>>>> friendly ping ...
>>> friendly ping ...
>>
>> I can't speak for Paolo, but I've mentioned before that the majority
>> of your messages end up in my spam. That's still the case, in fact
>> I just marked maybe 10 of them as not spam.
>>
>> You really need to get this issue sorted out, or you will continue
>> to have patches ignored because folks may simply not see them.
>>
> Hi,
> 
> Thanks for your notice.
> 
> Is it just me or do you see someone else's messages from *huawei.com
> end up in spam? I tried to seek help from our IT support, however, they
> didn't find anything unusual...

Not sure, I think it's just you. It may be the name as well "yukuai (C)"
probably makes gmail think it's not a real name? Or maybe it's the
yukuai3 in the email? Pure speculation on my side.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-23  1:24           ` Jens Axboe
@ 2022-05-23  8:18             ` Yu Kuai
  2022-05-23 12:36               ` Jens Axboe
  0 siblings, 1 reply; 26+ messages in thread
From: Yu Kuai @ 2022-05-23  8:18 UTC (permalink / raw)
  To: Jens Axboe, paolo.valente
  Cc: jack, tj, linux-block, cgroups, linux-kernel, yi.zhang

On 2022/05/23 9:24, Jens Axboe wrote:
> On 5/22/22 7:10 PM, yukuai (C) wrote:
>> On 2022/05/21 20:21, Jens Axboe wrote:
>>> On 5/21/22 1:22 AM, yukuai (C) wrote:
>>>> On 2022/05/14 17:29, yukuai (C) wrote:
>>>>> On 2022/05/05 9:00, yukuai (C) wrote:
>>>>>> Hi, Paolo
>>>>>>
>>>>>> Can you take a look at this patchset? It has been quite a long time
>>>>>> since we spotted this problem...
>>>>>>
>>>>>
>>>>> friendly ping ...
>>>> friendly ping ...
>>>
>>> I can't speak for Paolo, but I've mentioned before that the majority
>>> of your messages end up in my spam. That's still the case, in fact
>>> I just marked maybe 10 of them as not spam.
>>>
>>> You really need to get this issue sorted out, or you will continue
>>> to have patches ignored because folks may simply not see them.
>>>
>> Hi,
>>
>> Thanks for your notice.
>>
>> Is it just me or do you see someone else's messages from *huawei.com
>> end up in spam? I tried to seek help from our IT support, however, they
>> didn't find anything unusual...
> 
> Not sure, I think it's just you. It may be the name as well "yukuai (C)"
Hi, Jens

I just change this default name "yukuai (C)" to "Yu Kuai", can you
please have a check if following emails still go to spam?

https://lore.kernel.org/all/20220523082633.2324980-1-yukuai3@huawei.com/
> probably makes gmail think it's not a real name? Or maybe it's the
> yukuai3 in the email? Pure speculation on my side.
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-23  1:10         ` yukuai (C)
  2022-05-23  1:24           ` Jens Axboe
@ 2022-05-23  8:59           ` Jan Kara
  2022-05-23 12:36             ` Jens Axboe
  1 sibling, 1 reply; 26+ messages in thread
From: Jan Kara @ 2022-05-23  8:59 UTC (permalink / raw)
  To: yukuai (C)
  Cc: Jens Axboe, paolo.valente, jack, tj, linux-block, cgroups,
	linux-kernel, yi.zhang

On Mon 23-05-22 09:10:38, yukuai (C) wrote:
> On 2022/05/21 20:21, Jens Axboe wrote:
> > On 5/21/22 1:22 AM, yukuai (C) wrote:
> > > On 2022/05/14 17:29, yukuai (C) wrote:
> > > > On 2022/05/05 9:00, yukuai (C) wrote:
> > > > > Hi, Paolo
> > > > > 
> > > > > Can you take a look at this patchset? It has been quite a long time
> > > > > since we spotted this problem...
> > > > > 
> > > > 
> > > > friendly ping ...
> > > friendly ping ...
> > 
> > I can't speak for Paolo, but I've mentioned before that the majority
> > of your messages end up in my spam. That's still the case, in fact
> > I just marked maybe 10 of them as not spam.
> > 
> > You really need to get this issue sorted out, or you will continue
> > to have patches ignored because folks may simply not see them.
> > 
> Hi,
> 
> Thanks for your notice.
> 
> Is it just me or do you see someone else's messages from *huawei.com
> end up in spam? I tried to seek help from our IT support, however, they
> didn't find anything unusual...

So actually I have noticed that a lot of (valid) email from huawei.com (not
just you) ends up in the spam mailbox. For me, direct messages usually pass
(likely the matching SPF records for the originating mail server save the
email from going to spam), but messages going through mailing lists are
flagged as spam because they are missing a valid DKIM signature while the
huawei.com DMARC config says there should be one (even direct messages are
missing DKIM, so this does not seem to be a mailing list configuration
issue). So this seems to be some misconfiguration of the mail on the
huawei.com side (likely missing DKIM signing of outgoing email).

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-23  8:18             ` Yu Kuai
@ 2022-05-23 12:36               ` Jens Axboe
  2022-05-23 12:58                 ` Yu Kuai
  0 siblings, 1 reply; 26+ messages in thread
From: Jens Axboe @ 2022-05-23 12:36 UTC (permalink / raw)
  To: Yu Kuai, paolo.valente
  Cc: jack, tj, linux-block, cgroups, linux-kernel, yi.zhang

On 5/23/22 2:18 AM, Yu Kuai wrote:
> On 2022/05/23 9:24, Jens Axboe wrote:
>> On 5/22/22 7:10 PM, yukuai (C) wrote:
>>> On 2022/05/21 20:21, Jens Axboe wrote:
>>>> On 5/21/22 1:22 AM, yukuai (C) wrote:
>>>>> On 2022/05/14 17:29, yukuai (C) wrote:
>>>>>> On 2022/05/05 9:00, yukuai (C) wrote:
>>>>>>> Hi, Paolo
>>>>>>>
>>>>>>> Can you take a look at this patchset? It has been quite a long time
>>>>>>> since we spotted this problem...
>>>>>>>
>>>>>>
>>>>>> friendly ping ...
>>>>> friendly ping ...
>>>>
>>>> I can't speak for Paolo, but I've mentioned before that the majority
>>>> of your messages end up in my spam. That's still the case, in fact
>>>> I just marked maybe 10 of them as not spam.
>>>>
>>>> You really need to get this issue sorted out, or you will continue
>>>> to have patches ignored because folks may simply not see them.
>>>>
>>> Hi,
>>>
>>> Thanks for your notice.
>>>
>>> Is it just me or do you see someone else's messages from *huawei.com
>>> end up in spam? I tried to seek help from our IT support, however, they
>>> didn't find anything unusual...
>>
>> Not sure, I think it's just you. It may be the name as well "yukuai (C)"
> Hi, Jens
> 
> I just change this default name "yukuai (C)" to "Yu Kuai", can you
> please have a check if following emails still go to spam?
> 
> https://lore.kernel.org/all/20220523082633.2324980-1-yukuai3@huawei.com/

These did not go into spam, were delivered just fine.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-23  8:59           ` Jan Kara
@ 2022-05-23 12:36             ` Jens Axboe
  2022-05-23 15:25               ` Jan Kara
  0 siblings, 1 reply; 26+ messages in thread
From: Jens Axboe @ 2022-05-23 12:36 UTC (permalink / raw)
  To: Jan Kara, yukuai (C)
  Cc: paolo.valente, tj, linux-block, cgroups, linux-kernel, yi.zhang

On 5/23/22 2:59 AM, Jan Kara wrote:
> On Mon 23-05-22 09:10:38, yukuai (C) wrote:
>> On 2022/05/21 20:21, Jens Axboe wrote:
>>> On 5/21/22 1:22 AM, yukuai (C) wrote:
>>>> On 2022/05/14 17:29, yukuai (C) wrote:
>>>>> On 2022/05/05 9:00, yukuai (C) wrote:
>>>>>> Hi, Paolo
>>>>>>
>>>>>> Can you take a look at this patchset? It has been quite a long time
>>>>>> since we spotted this problem...
>>>>>>
>>>>>
>>>>> friendly ping ...
>>>> friendly ping ...
>>>
>>> I can't speak for Paolo, but I've mentioned before that the majority
>>> of your messages end up in my spam. That's still the case, in fact
>>> I just marked maybe 10 of them as not spam.
>>>
>>> You really need to get this issue sorted out, or you will continue
>>> to have patches ignored because folks may simply not see them.
>>>
>> Hi,
>>
>> Thanks for your notice.
>>
>> Is it just me or do you see someone else's messages from *huawei.com
>> end up in spam? I tried to seek help from our IT support, however, they
>> didn't find anything unusual...
> 
> So actually I have noticed that a lot of (valid) email from huawei.com (not
> just you) ends up in the spam mailbox. For me direct messages usually pass
> (likely matching SPF records for originating mail server save the email
> from going to spam) but messages going through mailing lists are flagged as
> spam because the emails are missing valid DKIM signature but huawei.com
> DMARC config says there should be DKIM signature (even direct messages are
> missing DKIM so this does not seem as a mailing list configuration issue).
> So this seems as some misconfiguration of the mails on huawei.com side
> (likely missing DKIM signing of outgoing email).

SPF/DKIM was indeed a problem earlier for yukuai's patches, but I don't
see that anymore. Maybe it's still an issue for some emails, from them
or Huawei in general?

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-23 12:36               ` Jens Axboe
@ 2022-05-23 12:58                 ` Yu Kuai
  2022-05-23 13:29                   ` Jens Axboe
  0 siblings, 1 reply; 26+ messages in thread
From: Yu Kuai @ 2022-05-23 12:58 UTC (permalink / raw)
  To: Jens Axboe, paolo.valente
  Cc: jack, tj, linux-block, cgroups, linux-kernel, yi.zhang

On 2022/05/23 20:36, Jens Axboe wrote:
> On 5/23/22 2:18 AM, Yu Kuai wrote:
>> On 2022/05/23 9:24, Jens Axboe wrote:
>>> On 5/22/22 7:10 PM, yukuai (C) wrote:
>>>> On 2022/05/21 20:21, Jens Axboe wrote:
>>>>> On 5/21/22 1:22 AM, yukuai (C) wrote:
>>>>>> On 2022/05/14 17:29, yukuai (C) wrote:
>>>>>>> On 2022/05/05 9:00, yukuai (C) wrote:
>>>>>>>> Hi, Paolo
>>>>>>>>
>>>>>>>> Can you take a look at this patchset? It has been quite a long time
>>>>>>>> since we spotted this problem...
>>>>>>>>
>>>>>>>
>>>>>>> friendly ping ...
>>>>>> friendly ping ...
>>>>>
>>>>> I can't speak for Paolo, but I've mentioned before that the majority
>>>>> of your messages end up in my spam. That's still the case, in fact
>>>>> I just marked maybe 10 of them as not spam.
>>>>>
>>>>> You really need to get this issue sorted out, or you will continue
>>>>> to have patches ignored because folks may simply not see them.
>>>>>
>>>> Hi,
>>>>
>>>> Thanks for your notice.
>>>>
>>>> Is it just me or do you see someone else's messages from *huawei.com
>>>> end up in spam? I tried to seek help from our IT support, however, they
>>>> didn't find anything unusual...
>>>
>>> Not sure, I think it's just you. It may be the name as well "yukuai (C)"
>> Hi, Jens
>>
>> I just change this default name "yukuai (C)" to "Yu Kuai", can you
>> please have a check if following emails still go to spam?
>>
>> https://lore.kernel.org/all/20220523082633.2324980-1-yukuai3@huawei.com/
> 
> These did not go into spam, were delivered just fine.
> 
Cheers for solving this, I'll resend this patchset just in case they are
in spam for Paolo...

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-23 12:58                 ` Yu Kuai
@ 2022-05-23 13:29                   ` Jens Axboe
  0 siblings, 0 replies; 26+ messages in thread
From: Jens Axboe @ 2022-05-23 13:29 UTC (permalink / raw)
  To: Yu Kuai, paolo.valente
  Cc: jack, tj, linux-block, cgroups, linux-kernel, yi.zhang

On 5/23/22 6:58 AM, Yu Kuai wrote:
> On 2022/05/23 20:36, Jens Axboe wrote:
>> On 5/23/22 2:18 AM, Yu Kuai wrote:
>>> On 2022/05/23 9:24, Jens Axboe wrote:
>>>> On 5/22/22 7:10 PM, yukuai (C) wrote:
>>>>> On 2022/05/21 20:21, Jens Axboe wrote:
>>>>>> On 5/21/22 1:22 AM, yukuai (C) wrote:
>>>>>>> On 2022/05/14 17:29, yukuai (C) wrote:
>>>>>>>> On 2022/05/05 9:00, yukuai (C) wrote:
>>>>>>>>> Hi, Paolo
>>>>>>>>>
>>>>>>>>> Can you take a look at this patchset? It has been quite a long time
>>>>>>>>> since we spotted this problem...
>>>>>>>>>
>>>>>>>>
>>>>>>>> friendly ping ...
>>>>>>> friendly ping ...
>>>>>>
>>>>>> I can't speak for Paolo, but I've mentioned before that the majority
>>>>>> of your messages end up in my spam. That's still the case, in fact
>>>>>> I just marked maybe 10 of them as not spam.
>>>>>>
>>>>>> You really need to get this issue sorted out, or you will continue
>>>>>> to have patches ignored because folks may simply not see them.
>>>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks for your notice.
>>>>>
>>>>> Is it just me or do you see someone else's messages from *huawei.com
>>>>> end up in spam? I tried to seek help from our IT support, however, they
>>>>> didn't find anything unusual...
>>>>
>>>> Not sure, I think it's just you. It may be the name as well "yukuai (C)"
>>> Hi, Jens
>>>
>>> I just change this default name "yukuai (C)" to "Yu Kuai", can you
>>> please have a check if following emails still go to spam?
>>>
>>> https://lore.kernel.org/all/20220523082633.2324980-1-yukuai3@huawei.com/
>>
>> These did not go into spam, were delivered just fine.
>>
> Cheers for solving this, I'll resend this patchset just in case they are
> in spam for Paolo...

Let's hope it's solved, you never know with gmail... But that series did
go through fine as well, fwiw.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-23 12:36             ` Jens Axboe
@ 2022-05-23 15:25               ` Jan Kara
  2022-05-24  1:13                 ` Yu Kuai
  2022-06-07  3:10                 ` Yu Kuai
  0 siblings, 2 replies; 26+ messages in thread
From: Jan Kara @ 2022-05-23 15:25 UTC (permalink / raw)
  To: yukuai (C)
  Cc: Jan Kara, yukuai (C),
	paolo.valente, tj, linux-block, cgroups, linux-kernel, yi.zhang,
	Jens Axboe

On Mon 23-05-22 06:36:58, Jens Axboe wrote:
> On 5/23/22 2:59 AM, Jan Kara wrote:
> > On Mon 23-05-22 09:10:38, yukuai (C) wrote:
> >> On 2022/05/21 20:21, Jens Axboe wrote:
> >>> On 5/21/22 1:22 AM, yukuai (C) wrote:
> >>>> On 2022/05/14 17:29, yukuai (C) wrote:
> >>>>> On 2022/05/05 9:00, yukuai (C) wrote:
> >>>>>> Hi, Paolo
> >>>>>>
> >>>>>> Can you take a look at this patchset? It has been quite a long time
> >>>>>> since we spotted this problem...
> >>>>>>
> >>>>>
> >>>>> friendly ping ...
> >>>> friendly ping ...
> >>>
> >>> I can't speak for Paolo, but I've mentioned before that the majority
> >>> of your messages end up in my spam. That's still the case, in fact
> >>> I just marked maybe 10 of them as not spam.
> >>>
> >>> You really need to get this issue sorted out, or you will continue
> >>> to have patches ignored because folks may simply not see them.
> >>>
> >> Hi,
> >>
> >> Thanks for your notice.
> >>
> >> Is it just me or do you see someone else's messages from *huawei.com
> >> end up in spam? I tried to seek help from our IT support, however, they
> >> didn't find anything unusual...
> > 
> > So actually I have noticed that a lot of (valid) email from huawei.com (not
> > just you) ends up in the spam mailbox. For me direct messages usually pass
> > (likely matching SPF records for originating mail server save the email
> > from going to spam) but messages going through mailing lists are flagged as
> > spam because the emails are missing valid DKIM signature but huawei.com
> > DMARC config says there should be DKIM signature (even direct messages are
> > missing DKIM so this does not seem as a mailing list configuration issue).
> > So this seems as some misconfiguration of the mails on huawei.com side
> > (likely missing DKIM signing of outgoing email).
> 
> SPF/DKIM was indeed a problem earlier for yukuai's patches, but I don't
> see that anymore. Maybe it's still an issue for some emails, from them
> or Huawei in general?

Hum, for me all emails from Huawei I've received even today fail the DKIM
check. After some more digging, there is an interesting inconsistency in
the DMARC configuration for the huawei.com domain. There is a DMARC record
for huawei.com like:

huawei.com.		600	IN	TXT	"v=DMARC1;p=none;rua=mailto:dmarc@edm.huawei.com"

which means no DKIM is required but _dmarc.huawei.com has:

_dmarc.huawei.com.	600	IN	TXT	"v=DMARC1;p=quarantine;ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com"

which says that DKIM is required. I guess this inconsistency may be the
reason why there are problems with DKIM validation for senders from
huawei.com. Yu Kuai, can you perhaps take this to your IT support to fix
this? Either make sure huawei.com emails get properly signed with DKIM or
remove the 'quarantine' record from _dmarc.huawei.com. Thanks!

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 26+ messages in thread
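
Jan's point about the two conflicting TXT records can be illustrated with a small parser. Note that per the DMARC specification (RFC 7489), receivers look up the policy at '_dmarc.<domain>', so the 'p=quarantine' record is the one that actually applies; the record published on the bare huawei.com name is not a DMARC policy lookup target. A minimal, illustrative tag parser (not a validating implementation) run over the two record strings quoted above:

```python
def parse_dmarc(record):
    """Split a DMARC TXT record like 'v=DMARC1;p=none;...' into tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

# The two records quoted in the message above.
bare = parse_dmarc("v=DMARC1;p=none;rua=mailto:dmarc@edm.huawei.com")
policy = parse_dmarc("v=DMARC1;p=quarantine;"
                     "ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com")

print(bare["p"], policy["p"])  # none quarantine
```

Receivers honoring the '_dmarc' record's p=quarantine will divert mail that fails DKIM/SPF alignment to spam, which matches what Jens and Jan observed.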

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-23 15:25               ` Jan Kara
@ 2022-05-24  1:13                 ` Yu Kuai
  2022-06-01  6:16                   ` Jens Axboe
  2022-06-07  3:10                 ` Yu Kuai
  1 sibling, 1 reply; 26+ messages in thread
From: Yu Kuai @ 2022-05-24  1:13 UTC (permalink / raw)
  To: Jan Kara
  Cc: paolo.valente, tj, linux-block, cgroups, linux-kernel, yi.zhang,
	Jens Axboe

On 2022/05/23 23:25, Jan Kara wrote:
> On Mon 23-05-22 06:36:58, Jens Axboe wrote:
>> On 5/23/22 2:59 AM, Jan Kara wrote:
>>> On Mon 23-05-22 09:10:38, yukuai (C) wrote:
>>>> On 2022/05/21 20:21, Jens Axboe wrote:
>>>>> On 5/21/22 1:22 AM, yukuai (C) wrote:
>>>>>> On 2022/05/14 17:29, yukuai (C) wrote:
>>>>>>> On 2022/05/05 9:00, yukuai (C) wrote:
>>>>>>>> Hi, Paolo
>>>>>>>>
>>>>>>>> Can you take a look at this patchset? It has been quite a long time
>>>>>>>> since we spotted this problem...
>>>>>>>>
>>>>>>>
>>>>>>> friendly ping ...
>>>>>> friendly ping ...
>>>>>
>>>>> I can't speak for Paolo, but I've mentioned before that the majority
>>>>> of your messages end up in my spam. That's still the case, in fact
>>>>> I just marked maybe 10 of them as not spam.
>>>>>
>>>>> You really need to get this issue sorted out, or you will continue
>>>>> to have patches ignored because folks may simply not see them.
>>>>>
>>>> Hi,
>>>>
>>>> Thanks for your notice.
>>>>
>>>> Is it just me or do you see someone else's messages from *huawei.com
>>>> end up in spam? I tried to seek help from our IT support, however, they
>>>> didn't find anything unusual...
>>>
>>> So actually I have noticed that a lot of (valid) email from huawei.com (not
>>> just you) ends up in the spam mailbox. For me direct messages usually pass
>>> (likely matching SPF records for originating mail server save the email
>>> from going to spam) but messages going through mailing lists are flagged as
>>> spam because the emails are missing valid DKIM signature but huawei.com
>>> DMARC config says there should be DKIM signature (even direct messages are
>>> missing DKIM so this does not seem as a mailing list configuration issue).
>>> So this seems as some misconfiguration of the mails on huawei.com side
>>> (likely missing DKIM signing of outgoing email).
>>
>> SPF/DKIM was indeed a problem earlier for yukuai's patches, but I don't
>> see that anymore. Maybe it's still an issue for some emails, from them
>> or Huawei in general?
> 
> Hum, for me all emails from Huawei I've received even today fail the DKIM
> check. After some more digging there is interesting inconsistency in DMARC
> configuration for huawei.com domain. There is DMARC record for huawei.com
> like:
> 
> huawei.com.		600	IN	TXT	"v=DMARC1;p=none;rua=mailto:dmarc@edm.huawei.com"
> 
> which means no DKIM is required but _dmarc.huawei.com has:
> 
> _dmarc.huawei.com.	600	IN	TXT	"v=DMARC1;p=quarantine;ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com"
> 
> which says that DKIM is required. I guess this inconsistency may be the
> reason why there are problems with DKIM validation for senders from
> huawei.com. Yu Kuai, can you perhaps take this to your IT support to fix
> this? Either make sure huawei.com emails get properly signed with DKIM or
> remove the 'quarantine' record from _dmarc.huawei.com. Thanks!
Of course, I'll try to contact our IT support.

Thanks,
Kuai
> 
> 								Honza
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-05-24  1:13                 ` Yu Kuai
@ 2022-06-01  6:16                   ` Jens Axboe
  2022-06-01  7:19                     ` Yu Kuai
  0 siblings, 1 reply; 26+ messages in thread
From: Jens Axboe @ 2022-06-01  6:16 UTC (permalink / raw)
  To: Yu Kuai, Jan Kara
  Cc: paolo.valente, tj, linux-block, cgroups, linux-kernel, yi.zhang

On 5/23/22 7:13 PM, Yu Kuai wrote:
> On 2022/05/23 23:25, Jan Kara wrote:
>> On Mon 23-05-22 06:36:58, Jens Axboe wrote:
>>> On 5/23/22 2:59 AM, Jan Kara wrote:
>>>> On Mon 23-05-22 09:10:38, yukuai (C) wrote:
>>>>> On 2022/05/21 20:21, Jens Axboe wrote:
>>>>>> On 5/21/22 1:22 AM, yukuai (C) wrote:
>>>>>>> On 2022/05/14 17:29, yukuai (C) wrote:
>>>>>>>> On 2022/05/05 9:00, yukuai (C) wrote:
>>>>>>>>> Hi, Paolo
>>>>>>>>>
>>>>>>>>> Can you take a look at this patchset? It has been quite a long time
>>>>>>>>> since we spotted this problem...
>>>>>>>>>
>>>>>>>>
>>>>>>>> friendly ping ...
>>>>>>> friendly ping ...
>>>>>>
>>>>>> I can't speak for Paolo, but I've mentioned before that the majority
>>>>>> of your messages end up in my spam. That's still the case, in fact
>>>>>> I just marked maybe 10 of them as not spam.
>>>>>>
>>>>>> You really need to get this issue sorted out, or you will continue
>>>>>> to have patches ignored because folks may simply not see them.
>>>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks for your notice.
>>>>>
>>>>> Is it just me or do you see someone else's messages from *huawei.com
>>>>> end up in spam? I tried to seek help from our IT support, however, they
>>>>> didn't find anything unusual...
>>>>
>>>> So actually I have noticed that a lot of (valid) email from huawei.com (not
>>>> just you) ends up in the spam mailbox. For me direct messages usually pass
>>>> (likely matching SPF records for originating mail server save the email
>>>> from going to spam) but messages going through mailing lists are flagged as
>>>> spam because the emails are missing valid DKIM signature but huawei.com
>>>> DMARC config says there should be DKIM signature (even direct messages are
>>>> missing DKIM so this does not seem as a mailing list configuration issue).
>>>> So this seems as some misconfiguration of the mails on huawei.com side
>>>> (likely missing DKIM signing of outgoing email).
>>>
>>> SPF/DKIM was indeed a problem earlier for yukuai's patches, but I don't
>>> see that anymore. Maybe it's still an issue for some emails, from them
>>> or Huawei in general?
>>
>> Hum, for me all emails from Huawei I've received even today fail the DKIM
>> check. After some more digging there is interesting inconsistency in DMARC
>> configuration for huawei.com domain. There is DMARC record for huawei.com
>> like:
>>
>> huawei.com.        600    IN    TXT    "v=DMARC1;p=none;rua=mailto:dmarc@edm.huawei.com"
>>
>> which means no DKIM is required but _dmarc.huawei.com has:
>>
>> _dmarc.huawei.com.    600    IN    TXT    "v=DMARC1;p=quarantine;ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com"
>>
>> which says that DKIM is required. I guess this inconsistency may be the
>> reason why there are problems with DKIM validation for senders from
>> huawei.com. Yu Kuai, can you perhaps take this to your IT support to fix
>> this? Either make sure huawei.com emails get properly signed with DKIM or
>> remove the 'quarantine' record from _dmarc.huawei.com. Thanks!
> Of course, I'll try to contact our IT support.

I second that, pretty much every email has been going into spam since, I
guess you just had a few lucky ones. Looks like Jan is right, it's a
server side configuration error that's causing this, and it's still
happening.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a special occasion
  2022-06-01  6:16                   ` Jens Axboe
@ 2022-06-01  7:19                     ` Yu Kuai
  0 siblings, 0 replies; 26+ messages in thread
From: Yu Kuai @ 2022-06-01  7:19 UTC (permalink / raw)
  To: Jens Axboe, Jan Kara
  Cc: paolo.valente, tj, linux-block, cgroups, linux-kernel, yi.zhang

在 2022/06/01 14:16, Jens Axboe 写道:
> On 5/23/22 7:13 PM, Yu Kuai wrote:
>> ? 2022/05/23 23:25, Jan Kara ??:
>>> On Mon 23-05-22 06:36:58, Jens Axboe wrote:
>>>> On 5/23/22 2:59 AM, Jan Kara wrote:
>>>>> On Mon 23-05-22 09:10:38, yukuai (C) wrote:
>>>>>> On 2022/05/21 20:21, Jens Axboe wrote:
>>>>>>> On 5/21/22 1:22 AM, yukuai (C) wrote:
>>>>>>>> On 2022/05/14 17:29, yukuai (C) wrote:
>>>>>>>>> On 2022/05/05 9:00, yukuai (C) wrote:
>>>>>>>>>> Hi, Paolo
>>>>>>>>>>
>>>>>>>>>> Can you take a look at this patchset? It has been quite a long time
>>>>>>>>>> since we spotted this problem...
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> friendly ping ...
>>>>>>>> friendly ping ...
>>>>>>>
>>>>>>> I can't speak for Paolo, but I've mentioned before that the majority
>>>>>>> of your messages end up in my spam. That's still the case, in fact
>>>>>>> I just marked maybe 10 of them as not spam.
>>>>>>>
>>>>>>> You really need to get this issued sorted out, or you will continue
>>>>>>> to have patches ignore because folks may simply not see them.
>>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Thanks for your notice.
>>>>>>
>>>>>> Is it just me or do you see someone else's messages from *huawei.com
>>>>>> end up in spam? I tried to seek help from our IT support, however, they
>>>>>> didn't find anything unusual...
>>>>>
>>>>> So actually I have noticed that a lot of (valid) email from huawei.com (not
>>>>> just you) ends up in the spam mailbox. For me direct messages usually pass
>>>>> (likely matching SPF records for originating mail server save the email
>>>>> from going to spam) but messages going through mailing lists are flagged as
>>>>> spam because the emails are missing valid DKIM signature but huawei.com
>>>>> DMARC config says there should be DKIM signature (even direct messages are
>>>>> missing DKIM so this does not seem as a mailing list configuration issue).
>>>>> So this seems as some misconfiguration of the mails on huawei.com side
>>>>> (likely missing DKIM signing of outgoing email).
>>>>
>>>> SPF/DKIM was indeed a problem earlier for yukaui patches, but I don't
>>>> see that anymore. Maybe it's still an issue for some emails, from them
>>>> or Huawei in general?
>>>
>>> Hum, for me all emails from Huawei I've received even today fail the DKIM
>>> check. After some more digging there is interesting inconsistency in DMARC
>>> configuration for huawei.com domain. There is DMARC record for huawei.com
>>> like:
>>>
>>> huawei.com.        600    IN    TXT    "v=DMARC1;p=none;rua=mailto:dmarc@edm.huawei.com"
>>>
>>> which means no DKIM is required but _dmarc.huawei.com has:
>>>
>>> _dmarc.huawei.com.    600    IN    TXT    "v=DMARC1;p=quarantine;ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com"
>>>
>>> which says that DKIM is required. I guess this inconsistency may be the
>>> reason why there are problems with DKIM validation for senders from
>>> huawei.com. Yu Kuai, can you perhaps take this to your IT support to fix
>>> this? Either make sure huawei.com emails get properly signed with DKIM or
>>> remove the 'quarantine' record from _dmarc.huawei.com. Thanks!
>> Of course, I'll try to contact our IT support.
> 
> I second that, pretty much every email has been going into spam since, I
> guess you just had a few lucky ones. Looks like Jan is right, it's a
> server side configuration error that's causing this, and it's still
> happening
> 

Thanks for your response 😄

I already contacted our IT support, and hopefully this can be solved
soon...


* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a specail occasion
  2022-05-23 15:25               ` Jan Kara
  2022-05-24  1:13                 ` Yu Kuai
@ 2022-06-07  3:10                 ` Yu Kuai
  2022-06-07  9:54                   ` Jan Kara
  1 sibling, 1 reply; 26+ messages in thread
From: Yu Kuai @ 2022-06-07  3:10 UTC (permalink / raw)
  To: Jan Kara
  Cc: paolo.valente, tj, linux-block, cgroups, linux-kernel, yi.zhang,
	Jens Axboe



On 2022/05/23 23:25, Jan Kara wrote:
> On Mon 23-05-22 06:36:58, Jens Axboe wrote:
>> On 5/23/22 2:59 AM, Jan Kara wrote:
>>> On Mon 23-05-22 09:10:38, yukuai (C) wrote:
>>>> On 2022/05/21 20:21, Jens Axboe wrote:
>>>>> On 5/21/22 1:22 AM, yukuai (C) wrote:
>>>>>> On 2022/05/14 17:29, yukuai (C) wrote:
>>>>>>> On 2022/05/05 9:00, yukuai (C) wrote:
>>>>>>>> Hi, Paolo
>>>>>>>>
>>>>>>>> Can you take a look at this patchset? It has been quite a long time
>>>>>>>> since we spotted this problem...
>>>>>>>>
>>>>>>>
>>>>>>> friendly ping ...
>>>>>> friendly ping ...
>>>>>
>>>>> I can't speak for Paolo, but I've mentioned before that the majority
>>>>> of your messages end up in my spam. That's still the case, in fact
>>>>> I just marked maybe 10 of them as not spam.
>>>>>
>>>>> You really need to get this issued sorted out, or you will continue
>>>>> to have patches ignore because folks may simply not see them.
>>>>>
>>>> Hi,
>>>>
>>>> Thanks for your notice.
>>>>
>>>> Is it just me or do you see someone else's messages from *huawei.com
>>>> end up in spam? I tried to seek help from our IT support, however, they
>>>> didn't find anything unusual...
>>>
>>> So actually I have noticed that a lot of (valid) email from huawei.com (not
>>> just you) ends up in the spam mailbox. For me direct messages usually pass
>>> (likely matching SPF records for originating mail server save the email
>>> from going to spam) but messages going through mailing lists are flagged as
>>> spam because the emails are missing valid DKIM signature but huawei.com
>>> DMARC config says there should be DKIM signature (even direct messages are
>>> missing DKIM so this does not seem as a mailing list configuration issue).
>>> So this seems as some misconfiguration of the mails on huawei.com side
>>> (likely missing DKIM signing of outgoing email).
>>
>> SPF/DKIM was indeed a problem earlier for yukaui patches, but I don't
>> see that anymore. Maybe it's still an issue for some emails, from them
>> or Huawei in general?
> 
> Hum, for me all emails from Huawei I've received even today fail the DKIM
> check. After some more digging there is interesting inconsistency in DMARC
> configuration for huawei.com domain. There is DMARC record for huawei.com
> like:
> 
> huawei.com.		600	IN	TXT	"v=DMARC1;p=none;rua=mailto:dmarc@edm.huawei.com"
> 
> which means no DKIM is required but _dmarc.huawei.com has:
> 
> _dmarc.huawei.com.	600	IN	TXT	"v=DMARC1;p=quarantine;ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com"
> 
> which says that DKIM is required. I guess this inconsistency may be the
> reason why there are problems with DKIM validation for senders from
> huawei.com. Yu Kuai, can you perhaps take this to your IT support to fix
> this? Either make sure huawei.com emails get properly signed with DKIM or
> remove the 'quarantine' record from _dmarc.huawei.com. Thanks!
> 
> 								Honza
> 
Hi, Jan and Jens

I just got response from our IT support:

'fo' is not set in our DMARC configuration (the default is 0), which means
both SPF and DKIM verification failed, so the emails end up in spam.

It is right that DKIM verification fails because the messages carry no
signature; however, our IT support is curious how SPF verification failed.

Can you please take a look at the IP address of the sender? Then our IT
support can check whether it is missing from the SPF records.
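For what it's worth, here is a rough sketch of the ip4:/ip6: part of an SPF
check, to show what a receiver effectively tests against the sender address.
This is hypothetical code, the record below is invented for illustration
(not the real huawei.com SPF data), and real SPF also has a:, mx: and
include: mechanisms that this ignores:

```python
# Rough sketch: does a sending IP fall inside any ip4:/ip6: mechanism of
# an SPF record? (Invented record; not the real huawei.com SPF data.)
import ipaddress

def ip_permitted(spf_record: str, ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    for term in spf_record.split():
        if term.startswith(("ip4:", "ip6:")):
            # ip_network.__contains__ returns False on a v4/v6 family
            # mismatch, so mixed-family terms are simply skipped.
            if addr in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

spf = "v=spf1 ip4:185.176.79.0/24 ip6:2620:137:e000::/48 -all"
print(ip_permitted(spf, "185.176.79.56"))   # True: inside the ip4 range
print(ip_permitted(spf, "45.249.212.188"))  # False: not listed in this record
```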

Thanks,
Kuai


* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a specail occasion
  2022-06-07  3:10                 ` Yu Kuai
@ 2022-06-07  9:54                   ` Jan Kara
  2022-06-07 11:51                     ` Yu Kuai
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Kara @ 2022-06-07  9:54 UTC (permalink / raw)
  To: Yu Kuai
  Cc: Jan Kara, paolo.valente, tj, linux-block, cgroups, linux-kernel,
	yi.zhang, Jens Axboe

On Tue 07-06-22 11:10:27, Yu Kuai wrote:
> On 2022/05/23 23:25, Jan Kara wrote:
> > Hum, for me all emails from Huawei I've received even today fail the DKIM
> > check. After some more digging there is interesting inconsistency in DMARC
> > configuration for huawei.com domain. There is DMARC record for huawei.com
> > like:
> > 
> > huawei.com.		600	IN	TXT	"v=DMARC1;p=none;rua=mailto:dmarc@edm.huawei.com"
> > 
> > which means no DKIM is required but _dmarc.huawei.com has:
> > 
> > _dmarc.huawei.com.	600	IN	TXT	"v=DMARC1;p=quarantine;ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com"
> > 
> > which says that DKIM is required. I guess this inconsistency may be the
> > reason why there are problems with DKIM validation for senders from
> > huawei.com. Yu Kuai, can you perhaps take this to your IT support to fix
> > this? Either make sure huawei.com emails get properly signed with DKIM or
> > remove the 'quarantine' record from _dmarc.huawei.com. Thanks!
> > 
> > 								Honza
> > 
> Hi, Jan and Jens
> 
> I just got response from our IT support:
> 
> 'fo' is not set in our dmarc configuration(default is 0), which means
> SPF and DKIM verify both failed so that emails will end up in spam.
> 
> It right that DKIM verify is failed because there is no signed key,
> however, our IT support are curious how SPF verify faild.
> 
> Can you guys please take a look at ip address of sender? So our IT
> support can take a look if they miss it from SPF records.

So SPF is what makes me receive direct emails from you. For example on this
email I can see:

Received: from frasgout.his.huawei.com (frasgout.his.huawei.com
        [185.176.79.56])
        (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128
        bits))
        (No client certificate requested)
        by smtp-in2.suse.de (Postfix) with ESMTPS id 4LHFjN2L0dzZfj
        for <jack@suse.cz>; Tue,  7 Jun 2022 03:10:32 +0000 (UTC)
...
Authentication-Results: smtp-in2.suse.de;
        dkim=none;
        dmarc=pass (policy=quarantine) header.from=huawei.com;
        spf=pass (smtp-in2.suse.de: domain of yukuai3@huawei.com designates
        185.176.79.56 as permitted sender) smtp.mailfrom=yukuai3@huawei.com

So indeed frasgout.his.huawei.com is correct outgoing server which makes
smtp-in2.suse.de believe the email despite missing DKIM signature. But the
problem starts when you send email to a mailing list. Let me take for
example your email from June 2 with Message-ID
<20220602082129.2805890-1-yukuai3@huawei.com>, subject "[PATCH -next]
mm/filemap: fix that first page is not mark accessed in filemap_read()".
There the mailing list server forwards the email so we have:

Received: from smtp-in2.suse.de ([192.168.254.78])
        (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
        by dovecot-director2.suse.de with LMTPS
        id 8MC5NfVvmGIPLwAApTUePA
        (envelope-from <linux-fsdevel-owner@vger.kernel.org>)
        for <jack@imap.suse.de>; Thu, 02 Jun 2022 08:08:21 +0000
Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20])
        by smtp-in2.suse.de (Postfix) with ESMTP id 4LDJYK5bf0zZg5
        for <jack@suse.cz>; Thu,  2 Jun 2022 08:08:21 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S232063AbiFBIIM (ORCPT <rfc822;jack@suse.cz>);
        Thu, 2 Jun 2022 04:08:12 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56178 "EHLO
        lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by
        vger.kernel.org
        with ESMTP id S232062AbiFBIIL (ORCPT
        <rfc822;linux-fsdevel@vger.kernel.org>);
        Thu, 2 Jun 2022 04:08:11 -0400
Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188])
        by lindbergh.monkeyblade.net (Postfix) with ESMTPS id
        75DDB25FE;
        Thu,  2 Jun 2022 01:08:08 -0700 (PDT)

and thus smtp-in2.suse.de complains:

Authentication-Results: smtp-in2.suse.de;
        dkim=none;
        dmarc=fail reason="SPF not aligned (relaxed), No valid DKIM"
        header.from=huawei.com (policy=quarantine);
        spf=pass (smtp-in2.suse.de: domain of
        linux-fsdevel-owner@vger.kernel.org designates 2620:137:e000::1:20 as
        permitted sender) smtp.mailfrom=linux-fsdevel-owner@vger.kernel.org

Because now we've got email with "From" header from huawei.com domain from
a vger mail server which was forwarding it. So SPF has no chance to match
(in fact SPF did pass for the Return-Path header which points to
vger.kernel.org but DMARC defines that if "From" and "Return-Path" do not
match, additional validation is needed - this is the "SPF not aligned
(relaxed)" message above). And missing DKIM (the additional validation
method) sends the email to spam.
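To spell out the alignment part (a hypothetical sketch, not how any real
DMARC filter is implemented; org_domain below is a crude stand-in for a
proper Public Suffix List lookup):

```python
# Hypothetical sketch of DMARC identifier alignment (simplified):
# "relaxed" alignment only requires the organizational domains to match;
# "strict" requires the exact domains to match.

def org_domain(domain: str) -> str:
    # Crude stand-in for a Public Suffix List lookup: keep the last
    # two labels ("frasgout.his.huawei.com" -> "huawei.com").
    return ".".join(domain.lower().split(".")[-2:])

def spf_aligned(from_domain: str, envelope_domain: str, mode: str = "relaxed") -> bool:
    if mode == "strict":
        return from_domain.lower() == envelope_domain.lower()
    return org_domain(from_domain) == org_domain(envelope_domain)

# Direct mail: From and envelope are both under huawei.com -> aligned,
# so a passing SPF check saves the email.
print(spf_aligned("huawei.com", "frasgout.his.huawei.com"))  # True

# Via the list: the envelope is rewritten to vger.kernel.org -> not
# aligned, so only a valid DKIM signature could still make DMARC pass.
print(spf_aligned("huawei.com", "vger.kernel.org"))          # False
```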

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a specail occasion
  2022-06-07  9:54                   ` Jan Kara
@ 2022-06-07 11:51                     ` Yu Kuai
  2022-06-07 13:06                       ` Yu Kuai
  0 siblings, 1 reply; 26+ messages in thread
From: Yu Kuai @ 2022-06-07 11:51 UTC (permalink / raw)
  To: Jan Kara
  Cc: paolo.valente, tj, linux-block, cgroups, linux-kernel, yi.zhang,
	Jens Axboe

On 2022/06/07 17:54, Jan Kara wrote:
> On Tue 07-06-22 11:10:27, Yu Kuai wrote:
>> On 2022/05/23 23:25, Jan Kara wrote:
>>> Hum, for me all emails from Huawei I've received even today fail the DKIM
>>> check. After some more digging there is interesting inconsistency in DMARC
>>> configuration for huawei.com domain. There is DMARC record for huawei.com
>>> like:
>>>
>>> huawei.com.		600	IN	TXT	"v=DMARC1;p=none;rua=mailto:dmarc@edm.huawei.com"
>>>
>>> which means no DKIM is required but _dmarc.huawei.com has:
>>>
>>> _dmarc.huawei.com.	600	IN	TXT	"v=DMARC1;p=quarantine;ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com"
>>>
>>> which says that DKIM is required. I guess this inconsistency may be the
>>> reason why there are problems with DKIM validation for senders from
>>> huawei.com. Yu Kuai, can you perhaps take this to your IT support to fix
>>> this? Either make sure huawei.com emails get properly signed with DKIM or
>>> remove the 'quarantine' record from _dmarc.huawei.com. Thanks!
>>>
>>> 								Honza
>>>
>> Hi, Jan and Jens
>>
>> I just got response from our IT support:
>>
>> 'fo' is not set in our dmarc configuration(default is 0), which means
>> SPF and DKIM verify both failed so that emails will end up in spam.
>>
>> It right that DKIM verify is failed because there is no signed key,
>> however, our IT support are curious how SPF verify faild.
>>
>> Can you guys please take a look at ip address of sender? So our IT
>> support can take a look if they miss it from SPF records.
> 
> So SPF is what makes me receive direct emails from you. For example on this
> email I can see:
> 
> Received: from frasgout.his.huawei.com (frasgout.his.huawei.com
>          [185.176.79.56])
>          (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128
>          bits))
>          (No client certificate requested)
>          by smtp-in2.suse.de (Postfix) with ESMTPS id 4LHFjN2L0dzZfj
>          for <jack@suse.cz>; Tue,  7 Jun 2022 03:10:32 +0000 (UTC)
> ...
> Authentication-Results: smtp-in2.suse.de;
>          dkim=none;
>          dmarc=pass (policy=quarantine) header.from=huawei.com;
>          spf=pass (smtp-in2.suse.de: domain of yukuai3@huawei.com designates
>          185.176.79.56 as permitted sender) smtp.mailfrom=yukuai3@huawei.com
> 
> So indeed frasgout.his.huawei.com is correct outgoing server which makes
> smtp-in2.suse.de believe the email despite missing DKIM signature. But the
> problem starts when you send email to a mailing list. Let me take for
> example your email from June 2 with Message-ID
> <20220602082129.2805890-1-yukuai3@huawei.com>, subject "[PATCH -next]
> mm/filemap: fix that first page is not mark accessed in filemap_read()".
> There the mailing list server forwards the email so we have:
> 
> Received: from smtp-in2.suse.de ([192.168.254.78])
>          (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
>          by dovecot-director2.suse.de with LMTPS
>          id 8MC5NfVvmGIPLwAApTUePA
>          (envelope-from <linux-fsdevel-owner@vger.kernel.org>)
>          for <jack@imap.suse.de>; Thu, 02 Jun 2022 08:08:21 +0000
> Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20])
>          by smtp-in2.suse.de (Postfix) with ESMTP id 4LDJYK5bf0zZg5
>          for <jack@suse.cz>; Thu,  2 Jun 2022 08:08:21 +0000 (UTC)
> Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
>          id S232063AbiFBIIM (ORCPT <rfc822;jack@suse.cz>);
>          Thu, 2 Jun 2022 04:08:12 -0400
> Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56178 "EHLO
>          lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by
>          vger.kernel.org
>          with ESMTP id S232062AbiFBIIL (ORCPT
>          <rfc822;linux-fsdevel@vger.kernel.org>);
>          Thu, 2 Jun 2022 04:08:11 -0400
> Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188])
>          by lindbergh.monkeyblade.net (Postfix) with ESMTPS id
>          75DDB25FE;
>          Thu,  2 Jun 2022 01:08:08 -0700 (PDT)
> 
> and thus smtp-in2.suse.de complains:
> 
> Authentication-Results: smtp-in2.suse.de;
>          dkim=none;
>          dmarc=fail reason="SPF not aligned (relaxed), No valid DKIM"
>          header.from=huawei.com (policy=quarantine);
>          spf=pass (smtp-in2.suse.de: domain of
>          linux-fsdevel-owner@vger.kernel.org designates 2620:137:e000::1:20 as
>          permitted sender) smtp.mailfrom=linux-fsdevel-owner@vger.kernel.org
> 
> Because now we've got email with "From" header from huawei.com domain from
> a vger mail server which was forwarding it. So SPF has no chance to match
> (in fact SPF did pass for the Return-Path header which points to
> vger.kernel.org but DMARC defines that if "From" and "Return-Path" do not
> match, additional validation is needed - this is the "SPF not aligned
> (relaxed)" message above). And missing DKIM (the additional validation
> method) sends the email to spam.

Thanks a lot for your analysis. AFAICS, in order to fix the
problem, either your mail server changes its configuration to set the
alignment mode to "relaxed" instead of "strict", or our mail server
adds a correct DKIM signature to outgoing emails.

I'll contact our IT support and try to get DKIM signatures added.

Thanks,
Kuai


* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a specail occasion
  2022-06-07 11:51                     ` Yu Kuai
@ 2022-06-07 13:06                       ` Yu Kuai
  2022-06-07 20:30                         ` Jan Kara
  0 siblings, 1 reply; 26+ messages in thread
From: Yu Kuai @ 2022-06-07 13:06 UTC (permalink / raw)
  To: Jan Kara
  Cc: paolo.valente, tj, linux-block, cgroups, linux-kernel, yi.zhang,
	Jens Axboe

On 2022/06/07 19:51, Yu Kuai wrote:
> On 2022/06/07 17:54, Jan Kara wrote:
>> On Tue 07-06-22 11:10:27, Yu Kuai wrote:
>>> On 2022/05/23 23:25, Jan Kara wrote:
>>>> Hum, for me all emails from Huawei I've received even today fail the 
>>>> DKIM
>>>> check. After some more digging there is interesting inconsistency in 
>>>> DMARC
>>>> configuration for huawei.com domain. There is DMARC record for 
>>>> huawei.com
>>>> like:
>>>>
>>>> huawei.com.        600    IN    TXT    
>>>> "v=DMARC1;p=none;rua=mailto:dmarc@edm.huawei.com"
>>>>
>>>> which means no DKIM is required but _dmarc.huawei.com has:
>>>>
>>>> _dmarc.huawei.com.    600    IN    TXT    
>>>> "v=DMARC1;p=quarantine;ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com" 
>>>>
>>>>
>>>> which says that DKIM is required. I guess this inconsistency may be the
>>>> reason why there are problems with DKIM validation for senders from
>>>> huawei.com. Yu Kuai, can you perhaps take this to your IT support to 
>>>> fix
>>>> this? Either make sure huawei.com emails get properly signed with 
>>>> DKIM or
>>>> remove the 'quarantine' record from _dmarc.huawei.com. Thanks!
>>>>
>>>>                                 Honza
>>>>
>>> Hi, Jan and Jens
>>>
>>> I just got response from our IT support:
>>>
>>> 'fo' is not set in our dmarc configuration(default is 0), which means
>>> SPF and DKIM verify both failed so that emails will end up in spam.
>>>
>>> It right that DKIM verify is failed because there is no signed key,
>>> however, our IT support are curious how SPF verify faild.
>>>
>>> Can you guys please take a look at ip address of sender? So our IT
>>> support can take a look if they miss it from SPF records.
>>
>> So SPF is what makes me receive direct emails from you. For example on 
>> this
>> email I can see:
>>
>> Received: from frasgout.his.huawei.com (frasgout.his.huawei.com
>>          [185.176.79.56])
>>          (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 
>> (128/128
>>          bits))
>>          (No client certificate requested)
>>          by smtp-in2.suse.de (Postfix) with ESMTPS id 4LHFjN2L0dzZfj
>>          for <jack@suse.cz>; Tue,  7 Jun 2022 03:10:32 +0000 (UTC)
>> ...
>> Authentication-Results: smtp-in2.suse.de;
>>          dkim=none;
>>          dmarc=pass (policy=quarantine) header.from=huawei.com;
>>          spf=pass (smtp-in2.suse.de: domain of yukuai3@huawei.com 
>> designates
>>          185.176.79.56 as permitted sender) 
>> smtp.mailfrom=yukuai3@huawei.com
>>
>> So indeed frasgout.his.huawei.com is correct outgoing server which makes
>> smtp-in2.suse.de believe the email despite missing DKIM signature. But 
>> the
>> problem starts when you send email to a mailing list. Let me take for
>> example your email from June 2 with Message-ID
>> <20220602082129.2805890-1-yukuai3@huawei.com>, subject "[PATCH -next]
>> mm/filemap: fix that first page is not mark accessed in filemap_read()".
>> There the mailing list server forwards the email so we have:
>>
>> Received: from smtp-in2.suse.de ([192.168.254.78])
>>          (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 
>> bits))
>>          by dovecot-director2.suse.de with LMTPS
>>          id 8MC5NfVvmGIPLwAApTUePA
>>          (envelope-from <linux-fsdevel-owner@vger.kernel.org>)
>>          for <jack@imap.suse.de>; Thu, 02 Jun 2022 08:08:21 +0000
>> Received: from out1.vger.email (out1.vger.email 
>> [IPv6:2620:137:e000::1:20])
>>          by smtp-in2.suse.de (Postfix) with ESMTP id 4LDJYK5bf0zZg5
>>          for <jack@suse.cz>; Thu,  2 Jun 2022 08:08:21 +0000 (UTC)
>> Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
>>          id S232063AbiFBIIM (ORCPT <rfc822;jack@suse.cz>);
>>          Thu, 2 Jun 2022 04:08:12 -0400
>> Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56178 "EHLO
>>          lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by
>>          vger.kernel.org
>>          with ESMTP id S232062AbiFBIIL (ORCPT
>>          <rfc822;linux-fsdevel@vger.kernel.org>);
>>          Thu, 2 Jun 2022 04:08:11 -0400
>> Received: from szxga02-in.huawei.com (szxga02-in.huawei.com 
>> [45.249.212.188])
>>          by lindbergh.monkeyblade.net (Postfix) with ESMTPS id
>>          75DDB25FE;
>>          Thu,  2 Jun 2022 01:08:08 -0700 (PDT)
>>
>> and thus smtp-in2.suse.de complains:
>>
>> Authentication-Results: smtp-in2.suse.de;
>>          dkim=none;
>>          dmarc=fail reason="SPF not aligned (relaxed), No valid DKIM"
>>          header.from=huawei.com (policy=quarantine);
>>          spf=pass (smtp-in2.suse.de: domain of
>>          linux-fsdevel-owner@vger.kernel.org designates 
>> 2620:137:e000::1:20 as
>>          permitted sender) 
>> smtp.mailfrom=linux-fsdevel-owner@vger.kernel.org
>>
>> Because now we've got email with "From" header from huawei.com domain 
>> from
>> a vger mail server which was forwarding it. So SPF has no chance to match
>> (in fact SPF did pass for the Return-Path header which points to
>> vger.kernel.org but DMARC defines that if "From" and "Return-Path" do not
>> match, additional validation is needed - this is the "SPF not aligned
>> (relaxed)" message above). And missing DKIM (the additional validation
>> method) sends the email to spam.
> 
> Thanks a lot for your analysis, afaics, in order to fix the
> problem, either your mail server change the configuration to set
> alignment mode to "relaxed" instead of "strict", or our mail server
> add correct DKIM signature for emails.
> 
> I'll contact with our IT support and try to add DKIM signature.
> 
> Thanks,
> Kuai

Hi, Jan

Our IT support is worried that adding DKIM signatures will degrade
performance. May I ask how your mail server is configured? Is the policy
quarantine or none, and are DKIM signatures supported or not?

Thanks,
Kuai



* Re: [PATCH -next v5 0/3] support concurrent sync io for bfq on a specail occasion
  2022-06-07 13:06                       ` Yu Kuai
@ 2022-06-07 20:30                         ` Jan Kara
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Kara @ 2022-06-07 20:30 UTC (permalink / raw)
  To: Yu Kuai
  Cc: Jan Kara, paolo.valente, tj, linux-block, cgroups, linux-kernel,
	yi.zhang, Jens Axboe

On Tue 07-06-22 21:06:55, Yu Kuai wrote:
> On 2022/06/07 19:51, Yu Kuai wrote:
> > On 2022/06/07 17:54, Jan Kara wrote:
> > > On Tue 07-06-22 11:10:27, Yu Kuai wrote:
> > > > On 2022/05/23 23:25, Jan Kara wrote:
> > > > > Hum, for me all emails from Huawei I've received even today
> > > > > fail the DKIM
> > > > > check. After some more digging there is interesting
> > > > > inconsistency in DMARC
> > > > > configuration for huawei.com domain. There is DMARC record
> > > > > for huawei.com
> > > > > like:
> > > > > 
> > > > > huawei.com.        600    IN    TXT
> > > > > "v=DMARC1;p=none;rua=mailto:dmarc@edm.huawei.com"
> > > > > 
> > > > > which means no DKIM is required but _dmarc.huawei.com has:
> > > > > 
> > > > > _dmarc.huawei.com.    600    IN    TXT    "v=DMARC1;p=quarantine;ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com"
> > > > > 
> > > > > 
> > > > > which says that DKIM is required. I guess this inconsistency may be the
> > > > > reason why there are problems with DKIM validation for senders from
> > > > > huawei.com. Yu Kuai, can you perhaps take this to your IT
> > > > > support to fix
> > > > > this? Either make sure huawei.com emails get properly signed
> > > > > with DKIM or
> > > > > remove the 'quarantine' record from _dmarc.huawei.com. Thanks!
> > > > > 
> > > > >                                 Honza
> > > > > 
> > > > Hi, Jan and Jens
> > > > 
> > > > I just got response from our IT support:
> > > > 
> > > > 'fo' is not set in our dmarc configuration(default is 0), which means
> > > > SPF and DKIM verify both failed so that emails will end up in spam.
> > > > 
> > > > It right that DKIM verify is failed because there is no signed key,
> > > > however, our IT support are curious how SPF verify faild.
> > > > 
> > > > Can you guys please take a look at ip address of sender? So our IT
> > > > support can take a look if they miss it from SPF records.
> > > 
> > > So SPF is what makes me receive direct emails from you. For example
> > > on this
> > > email I can see:
> > > 
> > > Received: from frasgout.his.huawei.com (frasgout.his.huawei.com
> > >          [185.176.79.56])
> > >          (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256
> > > (128/128
> > >          bits))
> > >          (No client certificate requested)
> > >          by smtp-in2.suse.de (Postfix) with ESMTPS id 4LHFjN2L0dzZfj
> > >          for <jack@suse.cz>; Tue,  7 Jun 2022 03:10:32 +0000 (UTC)
> > > ...
> > > Authentication-Results: smtp-in2.suse.de;
> > >          dkim=none;
> > >          dmarc=pass (policy=quarantine) header.from=huawei.com;
> > >          spf=pass (smtp-in2.suse.de: domain of yukuai3@huawei.com
> > > designates
> > >          185.176.79.56 as permitted sender)
> > > smtp.mailfrom=yukuai3@huawei.com
> > > 
> > > So indeed frasgout.his.huawei.com is correct outgoing server which makes
> > > smtp-in2.suse.de believe the email despite missing DKIM signature.
> > > But the
> > > problem starts when you send email to a mailing list. Let me take for
> > > example your email from June 2 with Message-ID
> > > <20220602082129.2805890-1-yukuai3@huawei.com>, subject "[PATCH -next]
> > > mm/filemap: fix that first page is not mark accessed in filemap_read()".
> > > There the mailing list server forwards the email so we have:
> > > 
> > > Received: from smtp-in2.suse.de ([192.168.254.78])
> > >          (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256
> > > bits))
> > >          by dovecot-director2.suse.de with LMTPS
> > >          id 8MC5NfVvmGIPLwAApTUePA
> > >          (envelope-from <linux-fsdevel-owner@vger.kernel.org>)
> > >          for <jack@imap.suse.de>; Thu, 02 Jun 2022 08:08:21 +0000
> > > Received: from out1.vger.email (out1.vger.email
> > > [IPv6:2620:137:e000::1:20])
> > >          by smtp-in2.suse.de (Postfix) with ESMTP id 4LDJYK5bf0zZg5
> > >          for <jack@suse.cz>; Thu,  2 Jun 2022 08:08:21 +0000 (UTC)
> > > Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
> > >          id S232063AbiFBIIM (ORCPT <rfc822;jack@suse.cz>);
> > >          Thu, 2 Jun 2022 04:08:12 -0400
> > > Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56178 "EHLO
> > >          lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by
> > >          vger.kernel.org
> > >          with ESMTP id S232062AbiFBIIL (ORCPT
> > >          <rfc822;linux-fsdevel@vger.kernel.org>);
> > >          Thu, 2 Jun 2022 04:08:11 -0400
> > > Received: from szxga02-in.huawei.com (szxga02-in.huawei.com
> > > [45.249.212.188])
> > >          by lindbergh.monkeyblade.net (Postfix) with ESMTPS id
> > >          75DDB25FE;
> > >          Thu,  2 Jun 2022 01:08:08 -0700 (PDT)
> > > 
> > > and thus smtp-in2.suse.de complains:
> > > 
> > > Authentication-Results: smtp-in2.suse.de;
> > >          dkim=none;
> > >          dmarc=fail reason="SPF not aligned (relaxed), No valid DKIM"
> > >          header.from=huawei.com (policy=quarantine);
> > >          spf=pass (smtp-in2.suse.de: domain of
> > >          linux-fsdevel-owner@vger.kernel.org designates
> > > 2620:137:e000::1:20 as
> > >          permitted sender)
> > > smtp.mailfrom=linux-fsdevel-owner@vger.kernel.org
> > > 
> > > Because now we've got email with "From" header from huawei.com
> > > domain from
> > > a vger mail server which was forwarding it. So SPF has no chance to match
> > > (in fact SPF did pass for the Return-Path header which points to
> > > vger.kernel.org but DMARC defines that if "From" and "Return-Path" do not
> > > match, additional validation is needed - this is the "SPF not aligned
> > > (relaxed)" message above). And missing DKIM (the additional validation
> > > method) sends the email to spam.
> > 
> > Thanks a lot for your analysis, afaics, in order to fix the
> > problem, either your mail server change the configuration to set
> > alignment mode to "relaxed" instead of "strict", or our mail server
> > add correct DKIM signature for emails.
> > 
> > I'll contact with our IT support and try to add DKIM signature.
> > 
> > Thanks,
> > Kuai
> 
> Hi, Jan
> 
> Our IT support is worried that add DKIM signature will degrade
> performance, may I ask that how is your mail server configuation? policy
> is quarantine or none, and dkim signature is supportted or not.

The DMARC policy ('none' / 'quarantine') is not configured on the side of
the receiving mail server but on the huawei.com side. As I wrote above, it
is this DMARC record in the DNS of the huawei.com domain that makes
receiving mail servers quarantine email without a DKIM signature (if SPF is
not aligned):

_dmarc.huawei.com.    600    IN    TXT    "v=DMARC1;p=quarantine;ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com"

So if your IT admins do not want to introduce DKIM signatures on outgoing
email, they should set policy to 'p=none' in the DMARC DNS record to tell
that fact to receiving mail servers.
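To make the effect of that tag concrete, here is a tiny sketch (not a real
DMARC parser, just the tag splitting a receiver does) applied to the
published record versus a 'p=none' variant:

```python
# Hypothetical sketch: split a DMARC TXT record into its tags and read
# out the policy a receiver would apply to unauthenticated mail.

def parse_dmarc(txt: str) -> dict:
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

current = parse_dmarc(
    "v=DMARC1;p=quarantine;ruf=mailto:dmarc@huawei.com;rua=mailto:dmarc@huawei.com"
)
proposed = dict(current, p="none")

print(current["p"])   # quarantine: receivers may send failing mail to spam
print(proposed["p"])  # none: receivers only report, they do not quarantine
```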

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


end of thread, other threads:[~2022-06-08  1:21 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-28 12:08 [PATCH -next v5 0/3] support concurrent sync io for bfq on a specail occasion Yu Kuai
2022-04-28 12:08 ` [PATCH -next v5 1/3] block, bfq: record how many queues are busy in bfq_group Yu Kuai
2022-04-28 12:45   ` Jan Kara
2022-04-28 12:08 ` [PATCH -next v5 2/3] block, bfq: refactor the counting of 'num_groups_with_pending_reqs' Yu Kuai
2022-04-28 12:08 ` [PATCH -next v5 3/3] block, bfq: do not idle if only one group is activated Yu Kuai
2022-05-05  1:00 ` [PATCH -next v5 0/3] support concurrent sync io for bfq on a specail occasion yukuai (C)
2022-05-14  9:29   ` yukuai (C)
2022-05-21  7:22     ` yukuai (C)
2022-05-21 12:21       ` Jens Axboe
2022-05-23  1:10         ` yukuai (C)
2022-05-23  1:24           ` Jens Axboe
2022-05-23  8:18             ` Yu Kuai
2022-05-23 12:36               ` Jens Axboe
2022-05-23 12:58                 ` Yu Kuai
2022-05-23 13:29                   ` Jens Axboe
2022-05-23  8:59           ` Jan Kara
2022-05-23 12:36             ` Jens Axboe
2022-05-23 15:25               ` Jan Kara
2022-05-24  1:13                 ` Yu Kuai
2022-06-01  6:16                   ` Jens Axboe
2022-06-01  7:19                     ` Yu Kuai
2022-06-07  3:10                 ` Yu Kuai
2022-06-07  9:54                   ` Jan Kara
2022-06-07 11:51                     ` Yu Kuai
2022-06-07 13:06                       ` Yu Kuai
2022-06-07 20:30                         ` Jan Kara

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).