* [PATCH net v4 0/2] fix packet stuck problem for lockless qdisc
@ 2021-04-16  1:16 Yunsheng Lin
  2021-04-16  1:16 ` [PATCH net v4 1/2] net: sched: " Yunsheng Lin
  2021-04-16  1:16 ` [PATCH net v4 2/2] net: sched: fix endless tx action reschedule during deactivation Yunsheng Lin
  0 siblings, 2 replies; 16+ messages in thread
From: Yunsheng Lin @ 2021-04-16  1:16 UTC (permalink / raw)
  To: davem, kuba
  Cc: olteanv, ast, daniel, andriin, edumazet, weiwan, cong.wang,
	ap420073, netdev, linux-kernel, linuxarm, mkl, linux-can, jhs,
	xiyou.wangcong, jiri, andrii, kafai, songliubraving, yhs,
	john.fastabend, kpsingh, bpf, jonas.bonn, pabeni, mzhivich,
	johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

This patchset fixes the packet stuck problem mentioned in [1].

Patch 1: Add STATE_MISSED flag to fix packet stuck problem.
Patch 2: Fix a tx_action rescheduling problem after STATE_MISSED
         flag is added in patch 1.

V4: Change STATE_NEED_RESCHEDULE to STATE_MISSED and add patch 2.

[1]. https://lkml.org/lkml/2019/10/9/42

Yunsheng Lin (2):
  net: sched: fix packet stuck problem for lockless qdisc
  net: sched: fix endless tx action reschedule during deactivation

 include/net/pkt_sched.h   |  7 +------
 include/net/sch_generic.h | 37 ++++++++++++++++++++++++++++++++++++-
 net/core/dev.c            | 26 ++++++++++++++++++++++----
 net/sched/sch_generic.c   | 16 +++++++++++++++-
 4 files changed, 74 insertions(+), 12 deletions(-)

-- 
2.7.4


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-16  1:16 [PATCH net v4 0/2] fix packet stuck problem for lockless qdisc Yunsheng Lin
@ 2021-04-16  1:16 ` Yunsheng Lin
  2021-04-19 15:29   ` Michal Kubecek
  2021-04-16  1:16 ` [PATCH net v4 2/2] net: sched: fix endless tx action reschedule during deactivation Yunsheng Lin
  1 sibling, 1 reply; 16+ messages in thread
From: Yunsheng Lin @ 2021-04-16  1:16 UTC (permalink / raw)
  To: davem, kuba
  Cc: olteanv, ast, daniel, andriin, edumazet, weiwan, cong.wang,
	ap420073, netdev, linux-kernel, linuxarm, mkl, linux-can, jhs,
	xiyou.wangcong, jiri, andrii, kafai, songliubraving, yhs,
	john.fastabend, kpsingh, bpf, jonas.bonn, pabeni, mzhivich,
	johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

The lockless qdisc has the following concurrency problem:
    cpu0                 cpu1
     .                     .
q->enqueue                 .
     .                     .
qdisc_run_begin()          .
     .                     .
dequeue_skb()              .
     .                     .
sch_direct_xmit()          .
     .                     .
     .                q->enqueue
     .             qdisc_run_begin()
     .            return and do nothing
     .                     .
qdisc_run_end()            .

cpu1 enqueues an skb without calling __qdisc_run() because cpu0
has not released the lock yet, so spin_trylock() returns false
for cpu1 in qdisc_run_begin(). cpu0 may not see the skb enqueued
by cpu1 when calling dequeue_skb(), because cpu1 may enqueue the
skb after cpu0 has called dequeue_skb() and before cpu0 calls
qdisc_run_end().
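
For reference, the pre-patch lockless path (condensed roughly from the
code being modified below) is:

static inline bool qdisc_run_begin(struct Qdisc *qdisc)
{
	if (qdisc->flags & TCQ_F_NOLOCK) {
		/* A failed trylock means "return and do nothing": the
		 * newly enqueued skb is left for the current lock holder,
		 * which may already be past dequeue_skb().
		 */
		if (!spin_trylock(&qdisc->seqlock))
			return false;
		WRITE_ONCE(qdisc->empty, false);
	} else if (qdisc_is_running(qdisc)) {
		return false;
	}
	...
}

static inline void qdisc_run_end(struct Qdisc *qdisc)
{
	write_seqcount_end(&qdisc->running);
	if (qdisc->flags & TCQ_F_NOLOCK)
		/* unlock without checking for a concurrently missed enqueue */
		spin_unlock(&qdisc->seqlock);
}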

The lockless qdisc has another concurrency problem when
tx_action is involved:

cpu0(serving tx_action)     cpu1             cpu2
          .                   .                .
          .              q->enqueue            .
          .            qdisc_run_begin()       .
          .              dequeue_skb()         .
          .                   .            q->enqueue
          .                   .                .
          .             sch_direct_xmit()      .
          .                   .         qdisc_run_begin()
          .                   .       return and do nothing
          .                   .                .
 clear __QDISC_STATE_SCHED    .                .
 qdisc_run_begin()            .                .
 return and do nothing        .                .
          .                   .                .
          .            qdisc_run_end()         .

This patch fixes the above data race by:
1. Testing STATE_MISSED before doing spin_trylock().
2. If the first spin_trylock() returns false and STATE_MISSED was
   not set before it, setting STATE_MISSED and retrying another
   spin_trylock() in case the other CPU may not see STATE_MISSED
   after it releases the lock.
3. Rescheduling if STATE_MISSED is set after the lock is released
   at the end of qdisc_run_end().

For the tx_action case, STATE_MISSED is also set when cpu1 is at
the end of qdisc_run_end(), so tx_action will be rescheduled
again to dequeue the skb enqueued by cpu2.

STATE_MISSED is cleared before retrying a dequeue when dequeuing
returns NULL, in order to reduce the overhead of the double
spin_trylock() and the __netif_schedule() call described above.

The performance impact of this patch, tested using pktgen and
dummy netdev with pfifo_fast qdisc attached:

 threads  without+this_patch   with+this_patch      delta
    1        2.61Mpps            2.60Mpps           -0.3%
    2        3.97Mpps            3.82Mpps           -3.7%
    4        5.62Mpps            5.59Mpps           -0.5%
    8        2.78Mpps            2.77Mpps           -0.3%
   16        2.22Mpps            2.22Mpps           -0.0%

Fixes: 6b3ba9146fe6 ("net: sched: allow qdiscs to handle locking")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Tested-by: Juergen Gross <jgross@suse.com>
---
V4: Change STATE_NEED_RESCHEDULE to STATE_MISSED, mirroring
    NAPI's NAPIF_STATE_MISSED, and add Juergen's "Tested-by"
    tag since there is only renaming and typo fixing between
    V3 and V4.
V3: Fix a compile error and a few comment typos, remove the
    __QDISC_STATE_DEACTIVATED checking, and update the
    performance data.
V2: Avoid the overhead of fixing the data race as much as
    possible.
---
 include/net/sch_generic.h | 37 ++++++++++++++++++++++++++++++++++++-
 net/sched/sch_generic.c   | 12 ++++++++++++
 2 files changed, 48 insertions(+), 1 deletion(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index f7a6e14..b85b8ea 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -36,6 +36,7 @@ struct qdisc_rate_table {
 enum qdisc_state_t {
 	__QDISC_STATE_SCHED,
 	__QDISC_STATE_DEACTIVATED,
+	__QDISC_STATE_MISSED,
 };
 
 struct qdisc_size_table {
@@ -159,8 +160,37 @@ static inline bool qdisc_is_empty(const struct Qdisc *qdisc)
 static inline bool qdisc_run_begin(struct Qdisc *qdisc)
 {
 	if (qdisc->flags & TCQ_F_NOLOCK) {
+		bool dont_retry = test_bit(__QDISC_STATE_MISSED,
+					   &qdisc->state);
+
+		if (spin_trylock(&qdisc->seqlock))
+			goto nolock_empty;
+
+		/* If the flag is set before doing the spin_trylock() and
+		 * the above spin_trylock() returns false, it means the other cpu
+		 * holding the lock will do the dequeuing for us, or it will see
+		 * the flag set after releasing the lock and reschedule
+		 * net_tx_action() to do the dequeuing.
+		 */
+		if (dont_retry)
+			return false;
+
+		/* We could do set_bit() before the first spin_trylock(),
+		 * and avoid doing the second spin_trylock() completely, but
+		 * then we could have multiple cpus doing the set_bit(). Here,
+		 * dont_retry is used to avoid the set_bit() and the second
+		 * spin_trylock(), which is about 5% faster than
+		 * doing the set_bit() before the first spin_trylock().
+		 */
+		set_bit(__QDISC_STATE_MISSED, &qdisc->state);
+
+		/* Retry again in case other CPU may not see the new flag
+		 * after it releases the lock at the end of qdisc_run_end().
+		 */
 		if (!spin_trylock(&qdisc->seqlock))
 			return false;
+
+nolock_empty:
 		WRITE_ONCE(qdisc->empty, false);
 	} else if (qdisc_is_running(qdisc)) {
 		return false;
@@ -176,8 +206,13 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
 static inline void qdisc_run_end(struct Qdisc *qdisc)
 {
 	write_seqcount_end(&qdisc->running);
-	if (qdisc->flags & TCQ_F_NOLOCK)
+	if (qdisc->flags & TCQ_F_NOLOCK) {
 		spin_unlock(&qdisc->seqlock);
+
+		if (unlikely(test_bit(__QDISC_STATE_MISSED,
+				      &qdisc->state)))
+			__netif_schedule(qdisc);
+	}
 }
 
 static inline bool qdisc_may_bulk(const struct Qdisc *qdisc)
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 44991ea..9bc73ea 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -640,8 +640,10 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
 {
 	struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
 	struct sk_buff *skb = NULL;
+	bool need_retry = true;
 	int band;
 
+retry:
 	for (band = 0; band < PFIFO_FAST_BANDS && !skb; band++) {
 		struct skb_array *q = band2list(priv, band);
 
@@ -652,6 +654,16 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
 	}
 	if (likely(skb)) {
 		qdisc_update_stats_at_dequeue(qdisc, skb);
+	} else if (need_retry &&
+		   test_and_clear_bit(__QDISC_STATE_MISSED,
+				      &qdisc->state)) {
+		/* do another dequeuing after clearing the flag to
+		 * avoid calling __netif_schedule().
+		 */
+		smp_mb__after_atomic();
+		need_retry = false;
+
+		goto retry;
 	} else {
 		WRITE_ONCE(qdisc->empty, true);
 	}
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net v4 2/2] net: sched: fix endless tx action reschedule during deactivation
  2021-04-16  1:16 [PATCH net v4 0/2] fix packet stuck problem for lockless qdisc Yunsheng Lin
  2021-04-16  1:16 ` [PATCH net v4 1/2] net: sched: " Yunsheng Lin
@ 2021-04-16  1:16 ` Yunsheng Lin
  1 sibling, 0 replies; 16+ messages in thread
From: Yunsheng Lin @ 2021-04-16  1:16 UTC (permalink / raw)
  To: davem, kuba
  Cc: olteanv, ast, daniel, andriin, edumazet, weiwan, cong.wang,
	ap420073, netdev, linux-kernel, linuxarm, mkl, linux-can, jhs,
	xiyou.wangcong, jiri, andrii, kafai, songliubraving, yhs,
	john.fastabend, kpsingh, bpf, jonas.bonn, pabeni, mzhivich,
	johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

Currently qdisc_run() checks STATE_DEACTIVATED of a lockless
qdisc before calling __qdisc_run(), which ultimately clears
STATE_MISSED when all the skbs have been dequeued. If
STATE_DEACTIVATED is set before STATE_MISSED is cleared, there
may be endless rescheduling of net_tx_action() at the end of
qdisc_run_end(), see below:

CPU0(net_tx_action)  CPU1(__dev_xmit_skb)  CPU2(dev_deactivate)
          .                   .                     .
          .            set STATE_MISSED             .
          .           __netif_schedule()            .
          .                   .           set STATE_DEACTIVATED
          .                   .                qdisc_reset()
          .                   .                     .
          .<---------------   .              synchronize_net()
clear __QDISC_STATE_SCHED  |  .                     .
          .                |  .                     .
          .                |  .                     .
          .                |  .           --------->.
          .                |  .          |          .
  test STATE_DEACTIVATED   |  .          | some_qdisc_is_busy()
__qdisc_run() *not* called |  .          |-----return *true*
          .                |  .                     .
   test STATE_MISSED       |  .                     .
 __netif_schedule()--------|  .                     .
          .                   .                     .
          .                   .                     .

__qdisc_run() is not called by net_tx_action() on CPU0 because
CPU2 has set the STATE_DEACTIVATED flag during dev_deactivate(),
and STATE_MISSED is only cleared in __qdisc_run(), so
__netif_schedule() is called endlessly at the end of
qdisc_run_end(), causing the endless tx action rescheduling
problem.

qdisc_run() called by net_tx_action() runs in softirq context,
which should have the same semantics as the qdisc_run() called
by __dev_xmit_skb() under rcu_read_lock_bh(). And since there is
a synchronize_net() between the STATE_DEACTIVATED flag being set
and qdisc_reset()/some_qdisc_is_busy() in dev_deactivate(), we
can safely bail out for a deactivated lockless qdisc in
net_tx_action(), and qdisc_reset() will reset all the skbs not
dequeued yet.

So add rcu_read_lock() explicitly to protect the qdisc_run()
and do the STATE_DEACTIVATED check in net_tx_action() before
calling qdisc_run_begin(). Another option is to do the check in
qdisc_run_end(), but that would add unnecessary overhead for the
non-tx_action case, because __dev_queue_xmit() will not see a
qdisc with STATE_DEACTIVATED after synchronize_net(); a qdisc
with STATE_DEACTIVATED can only be seen by net_tx_action()
because of __netif_schedule().

The STATE_DEACTIVATED check in qdisc_run() is there to avoid the
race between net_tx_action() and qdisc_reset(), see:
commit d518d2ed8640 ("net/sched: fix race between deactivation
and dequeue for NOLOCK qdisc"). As the bailout added above for
a deactivated lockless qdisc in net_tx_action() provides better
protection for that race without calling qdisc_run() at all,
remove the STATE_DEACTIVATED check from qdisc_run().

After qdisc_reset(), there is no skb left in the qdisc to be
dequeued, so clear STATE_MISSED in dev_reset_queue() too.

Fixes: 6b3ba9146fe6 ("net: sched: allow qdiscs to handle locking")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/net/pkt_sched.h |  7 +------
 net/core/dev.c          | 26 ++++++++++++++++++++++----
 net/sched/sch_generic.c |  4 +++-
 3 files changed, 26 insertions(+), 11 deletions(-)

diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index f5c1bee..6d7b12c 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -128,12 +128,7 @@ void __qdisc_run(struct Qdisc *q);
 static inline void qdisc_run(struct Qdisc *q)
 {
 	if (qdisc_run_begin(q)) {
-		/* NOLOCK qdisc must check 'state' under the qdisc seqlock
-		 * to avoid racing with dev_qdisc_reset()
-		 */
-		if (!(q->flags & TCQ_F_NOLOCK) ||
-		    likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
-			__qdisc_run(q);
+		__qdisc_run(q);
 		qdisc_run_end(q);
 	}
 }
diff --git a/net/core/dev.c b/net/core/dev.c
index be941ed..47cefcc 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4958,25 +4958,43 @@ static __latent_entropy void net_tx_action(struct softirq_action *h)
 		sd->output_queue_tailp = &sd->output_queue;
 		local_irq_enable();
 
+		rcu_read_lock();
+
 		while (head) {
 			struct Qdisc *q = head;
 			spinlock_t *root_lock = NULL;
 
 			head = head->next_sched;
 
-			if (!(q->flags & TCQ_F_NOLOCK)) {
-				root_lock = qdisc_lock(q);
-				spin_lock(root_lock);
-			}
 			/* We need to make sure head->next_sched is read
 			 * before clearing __QDISC_STATE_SCHED
 			 */
 			smp_mb__before_atomic();
+
+			if (!(q->flags & TCQ_F_NOLOCK)) {
+				root_lock = qdisc_lock(q);
+				spin_lock(root_lock);
+			} else if (unlikely(test_bit(__QDISC_STATE_DEACTIVATED,
+						     &q->state))) {
+				/* There is a synchronize_net() between
+				 * STATE_DEACTIVATED flag being set and
+				 * qdisc_reset()/some_qdisc_is_busy() in
+				 * dev_deactivate(), so we can safely bail out
+				 * early here to avoid data race between
+				 * qdisc_deactivate() and some_qdisc_is_busy()
+				 * for lockless qdisc.
+				 */
+				clear_bit(__QDISC_STATE_SCHED, &q->state);
+				continue;
+			}
+
 			clear_bit(__QDISC_STATE_SCHED, &q->state);
 			qdisc_run(q);
 			if (root_lock)
 				spin_unlock(root_lock);
 		}
+
+		rcu_read_unlock();
 	}
 
 	xfrm_dev_backlog(sd);
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 9bc73ea..c32ac5b 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -1170,8 +1170,10 @@ static void dev_reset_queue(struct net_device *dev,
 	qdisc_reset(qdisc);
 
 	spin_unlock_bh(qdisc_lock(qdisc));
-	if (nolock)
+	if (nolock) {
+		clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
 		spin_unlock_bh(&qdisc->seqlock);
+	}
 }
 
 static bool some_qdisc_is_busy(struct net_device *dev)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-16  1:16 ` [PATCH net v4 1/2] net: sched: " Yunsheng Lin
@ 2021-04-19 15:29   ` Michal Kubecek
  2021-04-19 23:55     ` Michal Kubecek
  0 siblings, 1 reply; 16+ messages in thread
From: Michal Kubecek @ 2021-04-19 15:29 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, jonas.bonn,
	pabeni, mzhivich, johunt, albcamus, kehuan.feng, a.fatoum,
	atenart, alexander.duyck, hdanton, jgross, JKosina

On Fri, Apr 16, 2021 at 09:16:48AM +0800, Yunsheng Lin wrote:
> The lockless qdisc has the following concurrency problem:
>     cpu0                 cpu1
>      .                     .
> q->enqueue                 .
>      .                     .
> qdisc_run_begin()          .
>      .                     .
> dequeue_skb()              .
>      .                     .
> sch_direct_xmit()          .
>      .                     .
>      .                q->enqueue
>      .             qdisc_run_begin()
>      .            return and do nothing
>      .                     .
> qdisc_run_end()            .
> 
> cpu1 enqueues an skb without calling __qdisc_run() because cpu0
> has not released the lock yet, so spin_trylock() returns false
> for cpu1 in qdisc_run_begin(). cpu0 may not see the skb enqueued
> by cpu1 when calling dequeue_skb(), because cpu1 may enqueue the
> skb after cpu0 has called dequeue_skb() and before cpu0 calls
> qdisc_run_end().
> 
> The lockless qdisc has another concurrency problem when
> tx_action is involved:
> 
> cpu0(serving tx_action)     cpu1             cpu2
>           .                   .                .
>           .              q->enqueue            .
>           .            qdisc_run_begin()       .
>           .              dequeue_skb()         .
>           .                   .            q->enqueue
>           .                   .                .
>           .             sch_direct_xmit()      .
>           .                   .         qdisc_run_begin()
>           .                   .       return and do nothing
>           .                   .                .
>  clear __QDISC_STATE_SCHED    .                .
>  qdisc_run_begin()            .                .
>  return and do nothing        .                .
>           .                   .                .
>           .            qdisc_run_end()         .
> 
> This patch fixes the above data race by:
> 1. Testing STATE_MISSED before doing spin_trylock().
> 2. If the first spin_trylock() returns false and STATE_MISSED was
>    not set before it, setting STATE_MISSED and retrying another
>    spin_trylock() in case the other CPU may not see STATE_MISSED
>    after it releases the lock.
> 3. Rescheduling if STATE_MISSED is set after the lock is released
>    at the end of qdisc_run_end().
> 
> For the tx_action case, STATE_MISSED is also set when cpu1 is at
> the end of qdisc_run_end(), so tx_action will be rescheduled
> again to dequeue the skb enqueued by cpu2.
> 
> STATE_MISSED is cleared before retrying a dequeue when dequeuing
> returns NULL, in order to reduce the overhead of the double
> spin_trylock() and the __netif_schedule() call described above.
> 
> The performance impact of this patch, tested using pktgen and
> dummy netdev with pfifo_fast qdisc attached:
> 
>  threads  without+this_patch   with+this_patch      delta
>     1        2.61Mpps            2.60Mpps           -0.3%
>     2        3.97Mpps            3.82Mpps           -3.7%
>     4        5.62Mpps            5.59Mpps           -0.5%
>     8        2.78Mpps            2.77Mpps           -0.3%
>    16        2.22Mpps            2.22Mpps           -0.0%
> 
> Fixes: 6b3ba9146fe6 ("net: sched: allow qdiscs to handle locking")
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> Tested-by: Juergen Gross <jgross@suse.com>
> ---
> V4: Change STATE_NEED_RESCHEDULE to STATE_MISSED mirroring
>     NAPI's NAPIF_STATE_MISSED, and add Juergen's "Tested-by"
>     tag for there is only renaming and typo fixing between
>     V4 and V3.
> V3: Fix a compile error and a few comment typo, remove the
>     __QDISC_STATE_DEACTIVATED checking, and update the
>     performance data.
> V2: Avoid the overhead of fixing the data race as much as
>     possible.

As pointed out in the discussion on v3, this patch may result in
significantly higher CPU consumption with multiple threads competing on
a saturated outgoing device. I missed this submission so that I haven't
checked it yet but given the description of v3->v4 changes above, it's
quite likely that it suffers from the same problem.

Michal

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-19 15:29   ` Michal Kubecek
@ 2021-04-19 23:55     ` Michal Kubecek
  2021-04-20  2:23       ` Yunsheng Lin
  2021-04-20 20:34       ` Michal Kubecek
  0 siblings, 2 replies; 16+ messages in thread
From: Michal Kubecek @ 2021-04-19 23:55 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On Mon, Apr 19, 2021 at 05:29:46PM +0200, Michal Kubecek wrote:
> 
> As pointed out in the discussion on v3, this patch may result in
> significantly higher CPU consumption with multiple threads competing on
> a saturated outgoing device. I missed this submission so that I haven't
> checked it yet but given the description of v3->v4 changes above, it's
> quite likely that it suffers from the same problem.

And it indeed does. However, with the additional patch from the v3
discussion, the numbers are approximately the same as with an unpatched
mainline kernel.

As with v4, I tried this patch on top of 5.12-rc7 with real devices.
I used two machines with 10Gb/s Intel ixgbe NICs, sender has 16 CPUs
(2 8-core CPUs with HT disabled) and 16 Rx/Tx queues, receiver has
48 CPUs (2 12-core CPUs with HT enabled) and 48 Rx/Tx queues.

  threads    5.12-rc7    5.12-rc7 + v4    5.12-rc7 + v4 + stop
     1        25.1%          38.1%            22.9%
     8        66.2%         277.0%            74.1%
    16        90.1%         150.7%            91.0%
    32       107.2%         272.6%           108.3%
    64       116.3%         487.5%           118.1%
   128       126.1%         946.7%           126.9%

(The values are normalized to one core, i.e. 100% corresponds to one
fully used logical CPU.)

So it seems that repeated scheduling while the queue was stopped is
indeed the main performance issue and that the other cases of the
logic being too pessimistic do not play a significant role. There is
an exception with 8 connections/threads, where the result with just
this series also looks abnormally high (e.g. much higher than with
16 threads). It might be worth investigating what happens there and
what the results with other thread counts around 8 look like.

I'll run some more tests with other traffic patterns tomorrow and
I'm also going to take a closer look at the additional patch.

Michal

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-19 23:55     ` Michal Kubecek
@ 2021-04-20  2:23       ` Yunsheng Lin
  2021-04-20 20:34       ` Michal Kubecek
  1 sibling, 0 replies; 16+ messages in thread
From: Yunsheng Lin @ 2021-04-20  2:23 UTC (permalink / raw)
  To: Michal Kubecek
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On 2021/4/20 7:55, Michal Kubecek wrote:
> On Mon, Apr 19, 2021 at 05:29:46PM +0200, Michal Kubecek wrote:
>>
>> As pointed out in the discussion on v3, this patch may result in
>> significantly higher CPU consumption with multiple threads competing on
>> a saturated outgoing device. I missed this submission so that I haven't
>> checked it yet but given the description of v3->v4 changes above, it's
>> quite likely that it suffers from the same problem.
> 
> And it indeed does. However, with the additional patch from the v3
> discussion, the numbers are approximately the same as with an unpatched
> mainline kernel.
> 
> As with v4, I tried this patch on top of 5.12-rc7 with real devices.
> I used two machines with 10Gb/s Intel ixgbe NICs, sender has 16 CPUs
> (2 8-core CPUs with HT disabled) and 16 Rx/Tx queues, receiver has
> 48 CPUs (2 12-core CPUs with HT enabled) and 48 Rx/Tx queues.
> 
>   threads    5.12-rc7    5.12-rc7 + v4    5.12-rc7 + v4 + stop
>      1        25.1%          38.1%            22.9%
>      8        66.2%         277.0%            74.1%
>     16        90.1%         150.7%            91.0%
>     32       107.2%         272.6%           108.3%
>     64       116.3%         487.5%           118.1%
>    128       126.1%         946.7%           126.9%
> 
> (The values are normalized to one core, i.e. 100% corresponds to one
> fully used logical CPU.)
> 
> So it seems that repeated scheduling while the queue was stopped is
> indeed the main performance issue and that other cases of the logic
> being too pessimistic do not play significant role. There is an
> exception with 8 connections/threads and the result with just this
> series also looks abnormally high (e.g. much higher than with
> 16 threads). It might be worth investigating what happens there and
> what do the results with other thread counts around 8 look like.

Will try to investigate the 8 connections/threads case.

> 
> I'll run some more tests with other traffic patterns tomorrow and
> I'm also going to take a closer look at the additional patch.

Thanks for doing the detailed testing and looking into it.

> 
> Michal
> 
> .
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-19 23:55     ` Michal Kubecek
  2021-04-20  2:23       ` Yunsheng Lin
@ 2021-04-20 20:34       ` Michal Kubecek
  2021-04-21  1:52         ` Yunsheng Lin
  1 sibling, 1 reply; 16+ messages in thread
From: Michal Kubecek @ 2021-04-20 20:34 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On Tue, Apr 20, 2021 at 01:55:03AM +0200, Michal Kubecek wrote:
> On Mon, Apr 19, 2021 at 05:29:46PM +0200, Michal Kubecek wrote:
> > 
> > As pointed out in the discussion on v3, this patch may result in
> > significantly higher CPU consumption with multiple threads competing on
> > a saturated outgoing device. I missed this submission so that I haven't
> > checked it yet but given the description of v3->v4 changes above, it's
> > quite likely that it suffers from the same problem.
> 
> And it indeed does. However, with the additional patch from the v3
> discussion, the numbers are approximately the same as with an unpatched
> mainline kernel.
> 
> As with v4, I tried this patch on top of 5.12-rc7 with real devices.
> I used two machines with 10Gb/s Intel ixgbe NICs, sender has 16 CPUs
> (2 8-core CPUs with HT disabled) and 16 Rx/Tx queues, receiver has
> 48 CPUs (2 12-core CPUs with HT enabled) and 48 Rx/Tx queues.
> 
>   threads    5.12-rc7    5.12-rc7 + v4    5.12-rc7 + v4 + stop
>      1        25.1%          38.1%            22.9%
>      8        66.2%         277.0%            74.1%
>     16        90.1%         150.7%            91.0%
>     32       107.2%         272.6%           108.3%
>     64       116.3%         487.5%           118.1%
>    128       126.1%         946.7%           126.9%
> 
> (The values are normalized to one core, i.e. 100% corresponds to one
> fully used logical CPU.)

I repeated the tests a few more times and with more iterations, and it
seems the problem was rather that the CPU utilization numbers are not
very stable, in particular with the number of connections/threads close
to the number of CPUs and Tx queues. Refined results (and also other
tests) show that the full 3-patch series performs similarly to unpatched
5.12-rc7 (within the margin of statistical error).

However, I noticed something disturbing in the results of a simple
1-thread TCP_STREAM test (client sends data through a TCP connection to
server using long writes, we measure the amount of data received by the
server):

  server: 172.17.1.1, port 12543
  iterations: 20, threads: 1, test length: 30
  test: TCP_STREAM, message size: 1048576
  
  1     927403548.4 B/s,  avg   927403548.4 B/s, mdev           0.0 B/s (  0.0%)
  2    1176317172.1 B/s,  avg  1051860360.2 B/s, mdev   124456811.8 B/s ( 11.8%), confid. +/-  1581348251.3 B/s (150.3%)
  3     927335837.8 B/s,  avg  1010352186.1 B/s, mdev   117354970.3 B/s ( 11.6%), confid. +/-   357073677.2 B/s ( 35.3%)
  4    1176728045.1 B/s,  avg  1051946150.8 B/s, mdev   124576544.7 B/s ( 11.8%), confid. +/-   228863127.8 B/s ( 21.8%)
  5    1176788216.3 B/s,  avg  1076914563.9 B/s, mdev   122102985.3 B/s ( 11.3%), confid. +/-   169478943.5 B/s ( 15.7%)
  6    1158167055.1 B/s,  avg  1090456645.8 B/s, mdev   115504209.5 B/s ( 10.6%), confid. +/-   132805140.8 B/s ( 12.2%)
  7    1176243474.4 B/s,  avg  1102711907.0 B/s, mdev   111069717.1 B/s ( 10.1%), confid. +/-   110956822.2 B/s ( 10.1%)
  8    1176771142.8 B/s,  avg  1111969311.5 B/s, mdev   106744173.5 B/s (  9.6%), confid. +/-    95417120.0 B/s (  8.6%)
  9    1176206364.6 B/s,  avg  1119106761.8 B/s, mdev   102644185.2 B/s (  9.2%), confid. +/-    83685200.5 B/s (  7.5%)
  10   1175888409.4 B/s,  avg  1124784926.6 B/s, mdev    98855550.5 B/s (  8.8%), confid. +/-    74537085.1 B/s (  6.6%)
  11   1176541407.6 B/s,  avg  1129490061.2 B/s, mdev    95422224.8 B/s (  8.4%), confid. +/-    67230249.7 B/s (  6.0%)
  12    934185352.8 B/s,  avg  1113214668.9 B/s, mdev   106114984.5 B/s (  9.5%), confid. +/-    70420712.5 B/s (  6.3%)
  13   1176550558.1 B/s,  avg  1118086660.3 B/s, mdev   103339448.9 B/s (  9.2%), confid. +/-    65002902.4 B/s (  5.8%)
  14   1176521808.8 B/s,  avg  1122260599.5 B/s, mdev   100711151.3 B/s (  9.0%), confid. +/-    60333655.0 B/s (  5.4%)
  15   1176744840.8 B/s,  avg  1125892882.3 B/s, mdev    98240838.2 B/s (  8.7%), confid. +/-    56319052.3 B/s (  5.0%)
  16   1176593778.5 B/s,  avg  1129061688.3 B/s, mdev    95909740.8 B/s (  8.5%), confid. +/-    52771633.5 B/s (  4.7%)
  17   1176583967.4 B/s,  avg  1131857116.5 B/s, mdev    93715582.2 B/s (  8.3%), confid. +/-    49669258.6 B/s (  4.4%)
  18   1176853301.8 B/s,  avg  1134356904.5 B/s, mdev    91656530.2 B/s (  8.1%), confid. +/-    46905244.8 B/s (  4.1%)
  19   1176592845.7 B/s,  avg  1136579848.8 B/s, mdev    89709043.8 B/s (  7.9%), confid. +/-    44424855.9 B/s (  3.9%)
  20   1176608117.3 B/s,  avg  1138581262.2 B/s, mdev    87871692.6 B/s (  7.7%), confid. +/-    42193098.5 B/s (  3.7%)
  all                     avg  1138581262.2 B/s, mdev    87871692.6 B/s (  7.7%), confid. +/-    42193098.5 B/s (  3.7%)

Each line shows the result of one 30 second long test and the average,
mean deviation and 99% confidence interval half width over the
iterations so far. While 17 of the iteration results are essentially
the wire speed minus TCP overhead, iterations 1, 3 and 12 are more than
20% lower. As the results of the same test on unpatched 5.12-rc7 are
much more consistent (the lowest iteration result through the whole
test was 1175939718.3 B/s and the mean deviation only 276889.1 B/s), it
doesn't seem to be just a random fluctuation.

I'll try to find out what happens in these outstanding iterations.

Michal

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-20 20:34       ` Michal Kubecek
@ 2021-04-21  1:52         ` Yunsheng Lin
  2021-04-21  5:31           ` Michal Kubecek
  0 siblings, 1 reply; 16+ messages in thread
From: Yunsheng Lin @ 2021-04-21  1:52 UTC (permalink / raw)
  To: Michal Kubecek
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On 2021/4/21 4:34, Michal Kubecek wrote:
> On Tue, Apr 20, 2021 at 01:55:03AM +0200, Michal Kubecek wrote:
>> On Mon, Apr 19, 2021 at 05:29:46PM +0200, Michal Kubecek wrote:
>>>
>>> As pointed out in the discussion on v3, this patch may result in
>>> significantly higher CPU consumption with multiple threads competing on
>>> a saturated outgoing device. I missed this submission so that I haven't
>>> checked it yet but given the description of v3->v4 changes above, it's
>>> quite likely that it suffers from the same problem.
>>
>> And it indeed does. However, with the additional patch from the v3
>> discussion, the numbers are approximately the same as with an unpatched
>> mainline kernel.
>>
>> As with v4, I tried this patch on top of 5.12-rc7 with real devices.
>> I used two machines with 10Gb/s Intel ixgbe NICs, sender has 16 CPUs
>> (2 8-core CPUs with HT disabled) and 16 Rx/Tx queues, receiver has
>> 48 CPUs (2 12-core CPUs with HT enabled) and 48 Rx/Tx queues.
>>
>>   threads    5.12-rc7    5.12-rc7 + v4    5.12-rc7 + v4 + stop
>>      1        25.1%          38.1%            22.9%
>>      8        66.2%         277.0%            74.1%
>>     16        90.1%         150.7%            91.0%
>>     32       107.2%         272.6%           108.3%
>>     64       116.3%         487.5%           118.1%
>>    128       126.1%         946.7%           126.9%
>>
>> (The values are normalized to one core, i.e. 100% corresponds to one
>> fully used logical CPU.)
> 
> I repeated the tests few more times and with more iterations and it
> seems the problem rather was that the CPU utilization numbers are not
> very stable, in particular with number of connections/threads close to
> the number of CPUs and Tx queues. Refined results (and also other tests)
> show that full 3-patch series performs similar to unpatched 5.12-rc7
> (within the margin of statistical error).

Thanks for the clarification.
I did a similar test yesterday (using a 10Gb netdev and the iperf tool
I have), and the initial result suggested the same.

> 
> However, I noticed something disturbing in the results of a simple
> 1-thread TCP_STREAM test (client sends data through a TCP connection to
> server using long writes, we measure the amount of data received by the
> server):
> 
>   server: 172.17.1.1, port 12543
>   iterations: 20, threads: 1, test length: 30
>   test: TCP_STREAM, message size: 1048576
>   
>   1     927403548.4 B/s,  avg   927403548.4 B/s, mdev           0.0 B/s (  0.0%)
>   2    1176317172.1 B/s,  avg  1051860360.2 B/s, mdev   124456811.8 B/s ( 11.8%), confid. +/-  1581348251.3 B/s (150.3%)
>   3     927335837.8 B/s,  avg  1010352186.1 B/s, mdev   117354970.3 B/s ( 11.6%), confid. +/-   357073677.2 B/s ( 35.3%)
>   4    1176728045.1 B/s,  avg  1051946150.8 B/s, mdev   124576544.7 B/s ( 11.8%), confid. +/-   228863127.8 B/s ( 21.8%)
>   5    1176788216.3 B/s,  avg  1076914563.9 B/s, mdev   122102985.3 B/s ( 11.3%), confid. +/-   169478943.5 B/s ( 15.7%)
>   6    1158167055.1 B/s,  avg  1090456645.8 B/s, mdev   115504209.5 B/s ( 10.6%), confid. +/-   132805140.8 B/s ( 12.2%)
>   7    1176243474.4 B/s,  avg  1102711907.0 B/s, mdev   111069717.1 B/s ( 10.1%), confid. +/-   110956822.2 B/s ( 10.1%)
>   8    1176771142.8 B/s,  avg  1111969311.5 B/s, mdev   106744173.5 B/s (  9.6%), confid. +/-    95417120.0 B/s (  8.6%)
>   9    1176206364.6 B/s,  avg  1119106761.8 B/s, mdev   102644185.2 B/s (  9.2%), confid. +/-    83685200.5 B/s (  7.5%)
>   10   1175888409.4 B/s,  avg  1124784926.6 B/s, mdev    98855550.5 B/s (  8.8%), confid. +/-    74537085.1 B/s (  6.6%)
>   11   1176541407.6 B/s,  avg  1129490061.2 B/s, mdev    95422224.8 B/s (  8.4%), confid. +/-    67230249.7 B/s (  6.0%)
>   12    934185352.8 B/s,  avg  1113214668.9 B/s, mdev   106114984.5 B/s (  9.5%), confid. +/-    70420712.5 B/s (  6.3%)
>   13   1176550558.1 B/s,  avg  1118086660.3 B/s, mdev   103339448.9 B/s (  9.2%), confid. +/-    65002902.4 B/s (  5.8%)
>   14   1176521808.8 B/s,  avg  1122260599.5 B/s, mdev   100711151.3 B/s (  9.0%), confid. +/-    60333655.0 B/s (  5.4%)
>   15   1176744840.8 B/s,  avg  1125892882.3 B/s, mdev    98240838.2 B/s (  8.7%), confid. +/-    56319052.3 B/s (  5.0%)
>   16   1176593778.5 B/s,  avg  1129061688.3 B/s, mdev    95909740.8 B/s (  8.5%), confid. +/-    52771633.5 B/s (  4.7%)
>   17   1176583967.4 B/s,  avg  1131857116.5 B/s, mdev    93715582.2 B/s (  8.3%), confid. +/-    49669258.6 B/s (  4.4%)
>   18   1176853301.8 B/s,  avg  1134356904.5 B/s, mdev    91656530.2 B/s (  8.1%), confid. +/-    46905244.8 B/s (  4.1%)
>   19   1176592845.7 B/s,  avg  1136579848.8 B/s, mdev    89709043.8 B/s (  7.9%), confid. +/-    44424855.9 B/s (  3.9%)
>   20   1176608117.3 B/s,  avg  1138581262.2 B/s, mdev    87871692.6 B/s (  7.7%), confid. +/-    42193098.5 B/s (  3.7%)
>   all                     avg  1138581262.2 B/s, mdev    87871692.6 B/s (  7.7%), confid. +/-    42193098.5 B/s (  3.7%)
> 
> Each line shows result of one 30 second long test and average, mean
> deviation and 99% confidence interval half width through the iterations
> so far. While 17 iteration results are essentially the wire speed minus
> TCP overhead, iterations 1, 3 and 12 are more than 20% lower. As results
> of the same test on unpatched 5.12-rc7 are much more consistent (the
> lowest iteration result through the whole test was 1175939718.3 and the
> mean deviation only 276889.1 B/s), it doesn't seem to be just a random
> fluctuation.

I think I need to relearn the statistical math to understand the above
"99% confidence interval half width" :)

But the problem does not seem related so much to the "99% confidence
interval half width" as to the "mean deviation"?

I tried using netperf, which only seems to show a throughput of 9415.06
(10^6 bits/sec) using the 10G netdev. Which tool did you use to show the
above numbers?

> 
> I'll try to find out what happens in these outstanding iterations.

As I understand it, if there is only one thread, the first spin_trylock()
is likely to return true for the xmitting thread, unless the scheduled
net_tx_action() causes the xmitting thread's spin_trylock() to return
false?

Thanks for the comprehensive testing and analysis.

> 
> Michal
> 
> .
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-21  1:52         ` Yunsheng Lin
@ 2021-04-21  5:31           ` Michal Kubecek
  2021-04-21  8:21             ` Yunsheng Lin
  0 siblings, 1 reply; 16+ messages in thread
From: Michal Kubecek @ 2021-04-21  5:31 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On Wed, Apr 21, 2021 at 09:52:40AM +0800, Yunsheng Lin wrote:
> On 2021/4/21 4:34, Michal Kubecek wrote:
> > However, I noticed something disturbing in the results of a simple
> > 1-thread TCP_STREAM test (client sends data through a TCP connection to
> > server using long writes, we measure the amount of data received by the
> > server):
> > 
> >   server: 172.17.1.1, port 12543
> >   iterations: 20, threads: 1, test length: 30
> >   test: TCP_STREAM, message size: 1048576
> >   
> >   1     927403548.4 B/s,  avg   927403548.4 B/s, mdev           0.0 B/s (  0.0%)
> >   2    1176317172.1 B/s,  avg  1051860360.2 B/s, mdev   124456811.8 B/s ( 11.8%), confid. +/-  1581348251.3 B/s (150.3%)
> >   3     927335837.8 B/s,  avg  1010352186.1 B/s, mdev   117354970.3 B/s ( 11.6%), confid. +/-   357073677.2 B/s ( 35.3%)
> >   4    1176728045.1 B/s,  avg  1051946150.8 B/s, mdev   124576544.7 B/s ( 11.8%), confid. +/-   228863127.8 B/s ( 21.8%)
> >   5    1176788216.3 B/s,  avg  1076914563.9 B/s, mdev   122102985.3 B/s ( 11.3%), confid. +/-   169478943.5 B/s ( 15.7%)
> >   6    1158167055.1 B/s,  avg  1090456645.8 B/s, mdev   115504209.5 B/s ( 10.6%), confid. +/-   132805140.8 B/s ( 12.2%)
> >   7    1176243474.4 B/s,  avg  1102711907.0 B/s, mdev   111069717.1 B/s ( 10.1%), confid. +/-   110956822.2 B/s ( 10.1%)
> >   8    1176771142.8 B/s,  avg  1111969311.5 B/s, mdev   106744173.5 B/s (  9.6%), confid. +/-    95417120.0 B/s (  8.6%)
> >   9    1176206364.6 B/s,  avg  1119106761.8 B/s, mdev   102644185.2 B/s (  9.2%), confid. +/-    83685200.5 B/s (  7.5%)
> >   10   1175888409.4 B/s,  avg  1124784926.6 B/s, mdev    98855550.5 B/s (  8.8%), confid. +/-    74537085.1 B/s (  6.6%)
> >   11   1176541407.6 B/s,  avg  1129490061.2 B/s, mdev    95422224.8 B/s (  8.4%), confid. +/-    67230249.7 B/s (  6.0%)
> >   12    934185352.8 B/s,  avg  1113214668.9 B/s, mdev   106114984.5 B/s (  9.5%), confid. +/-    70420712.5 B/s (  6.3%)
> >   13   1176550558.1 B/s,  avg  1118086660.3 B/s, mdev   103339448.9 B/s (  9.2%), confid. +/-    65002902.4 B/s (  5.8%)
> >   14   1176521808.8 B/s,  avg  1122260599.5 B/s, mdev   100711151.3 B/s (  9.0%), confid. +/-    60333655.0 B/s (  5.4%)
> >   15   1176744840.8 B/s,  avg  1125892882.3 B/s, mdev    98240838.2 B/s (  8.7%), confid. +/-    56319052.3 B/s (  5.0%)
> >   16   1176593778.5 B/s,  avg  1129061688.3 B/s, mdev    95909740.8 B/s (  8.5%), confid. +/-    52771633.5 B/s (  4.7%)
> >   17   1176583967.4 B/s,  avg  1131857116.5 B/s, mdev    93715582.2 B/s (  8.3%), confid. +/-    49669258.6 B/s (  4.4%)
> >   18   1176853301.8 B/s,  avg  1134356904.5 B/s, mdev    91656530.2 B/s (  8.1%), confid. +/-    46905244.8 B/s (  4.1%)
> >   19   1176592845.7 B/s,  avg  1136579848.8 B/s, mdev    89709043.8 B/s (  7.9%), confid. +/-    44424855.9 B/s (  3.9%)
> >   20   1176608117.3 B/s,  avg  1138581262.2 B/s, mdev    87871692.6 B/s (  7.7%), confid. +/-    42193098.5 B/s (  3.7%)
> >   all                     avg  1138581262.2 B/s, mdev    87871692.6 B/s (  7.7%), confid. +/-    42193098.5 B/s (  3.7%)
> > 
> > Each line shows result of one 30 second long test and average, mean
> > deviation and 99% confidence interval half width through the iterations
> > so far. While 17 iteration results are essentially the wire speed minus
> > TCP overhead, iterations 1, 3 and 12 are more than 20% lower. As results
> > of the same test on unpatched 5.12-rc7 are much more consistent (the
> > lowest iteration result through the whole test was 1175939718.3 and the
> > mean deviation only 276889.1 B/s), it doesn't seem to be just a random
> > fluctuation.
> 
> I think I need to relearn the statistical math to understand the above
> "99% confidence interval half width ":)

An easy way to understand it is that if the last column shows 42 MB/s,
it means that with 99% confidence (probability), the measured average
is within 42 MB/s of the actual one.
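
For illustration, the per-iteration statistics can be computed roughly
as in the standalone sketch below. This is not nperf's actual code: the
mdev and confidence definitions are inferred from the printed numbers,
and the normal-approximation factor 2.576 is used instead of whatever
small-sample correction nperf applies, so the confidence value will not
match the table above exactly.

#include <math.h>
#include <stdio.h>

/* Throughput samples in B/s; the first three iteration results above. */
static const double samples[] = { 927403548.4, 1176317172.1, 927335837.8 };

int main(void)
{
	int i, n = sizeof(samples) / sizeof(samples[0]);
	double mean = 0.0, sqdev = 0.0, mdev, var;

	for (i = 0; i < n; i++)
		mean += samples[i];
	mean /= n;

	for (i = 0; i < n; i++)
		sqdev += (samples[i] - mean) * (samples[i] - mean);

	/* root-mean-square deviation; matches the mdev column above */
	mdev = sqrt(sqdev / n);
	/* sample variance, used for the confidence interval */
	var = sqdev / (n - 1);

	/* 99% confidence interval half width, normal approximation */
	printf("avg %.1f B/s, mdev %.1f B/s (%.1f%%), confid. +/- %.1f B/s\n",
	       mean, mdev, 100.0 * mdev / mean, 2.576 * sqrt(var / n));

	return 0;
}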

> But the problem do not seems related too much with "99% confidence
> interval half width ", but with "mean deviation"?

Mean deviation is a symptom here. What worries me is that most results
show the same value (corresponding to a fully saturated line) with very
little variation, but in some iterations (1, 3 and 12 here) we suddenly
see a much lower value (by ~2.5 Gb/s, i.e. 20-25%). And as each iteration
runs the connection for 30 seconds, it cannot be just some short glitch.

I managed to get tcpdump captures yesterday, but they are huge even with
"-s 128" (client ~5.6 GB, server ~9.0 GB), so working with them is
rather slow and I did not find anything interesting yet.

> I tried using netperf, which seems only show throughput of 9415.06
> (10^6bits/sec) using 10G netdev. which tool did you used to show the
> above number?

9415.06 * 10^6 b/s is 1176.9 * 10^6 B/s so it's about the same as the
numbers above (the good ones, that is). As this was part of a longer
test trying different thread counts from 1 to 128, I was using another
utility I started writing recently:

  https://github.com/mkubecek/nperf

It is still raw and a lot of features are missing, but it can already be
used for multithreaded TCP_STREAM and TCP_RR tests. In particular, the
output above was with

  nperf -H 172.17.1.1 -l 30 -i 20 --exact -t TCP_STREAM -M 1

The results above are with 1 thread, so they should be reproducible with
netperf too. But the test needs to be repeated enough times; when
I wanted to get the packet captures, I did 40 iterations and only two of
them showed the lower result.

Michal

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-21  5:31           ` Michal Kubecek
@ 2021-04-21  8:21             ` Yunsheng Lin
  2021-04-21  8:44               ` Michal Kubecek
  0 siblings, 1 reply; 16+ messages in thread
From: Yunsheng Lin @ 2021-04-21  8:21 UTC (permalink / raw)
  To: Michal Kubecek
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On 2021/4/21 13:31, Michal Kubecek wrote:
> On Wed, Apr 21, 2021 at 09:52:40AM +0800, Yunsheng Lin wrote:
>> On 2021/4/21 4:34, Michal Kubecek wrote:
>>> However, I noticed something disturbing in the results of a simple
>>> 1-thread TCP_STREAM test (client sends data through a TCP connection to
>>> server using long writes, we measure the amount of data received by the
>>> server):
>>>
>>>   server: 172.17.1.1, port 12543
>>>   iterations: 20, threads: 1, test length: 30
>>>   test: TCP_STREAM, message size: 1048576
>>>   
>>>   1     927403548.4 B/s,  avg   927403548.4 B/s, mdev           0.0 B/s (  0.0%)
>>>   2    1176317172.1 B/s,  avg  1051860360.2 B/s, mdev   124456811.8 B/s ( 11.8%), confid. +/-  1581348251.3 B/s (150.3%)
>>>   3     927335837.8 B/s,  avg  1010352186.1 B/s, mdev   117354970.3 B/s ( 11.6%), confid. +/-   357073677.2 B/s ( 35.3%)
>>>   4    1176728045.1 B/s,  avg  1051946150.8 B/s, mdev   124576544.7 B/s ( 11.8%), confid. +/-   228863127.8 B/s ( 21.8%)
>>>   5    1176788216.3 B/s,  avg  1076914563.9 B/s, mdev   122102985.3 B/s ( 11.3%), confid. +/-   169478943.5 B/s ( 15.7%)
>>>   6    1158167055.1 B/s,  avg  1090456645.8 B/s, mdev   115504209.5 B/s ( 10.6%), confid. +/-   132805140.8 B/s ( 12.2%)
>>>   7    1176243474.4 B/s,  avg  1102711907.0 B/s, mdev   111069717.1 B/s ( 10.1%), confid. +/-   110956822.2 B/s ( 10.1%)
>>>   8    1176771142.8 B/s,  avg  1111969311.5 B/s, mdev   106744173.5 B/s (  9.6%), confid. +/-    95417120.0 B/s (  8.6%)
>>>   9    1176206364.6 B/s,  avg  1119106761.8 B/s, mdev   102644185.2 B/s (  9.2%), confid. +/-    83685200.5 B/s (  7.5%)
>>>   10   1175888409.4 B/s,  avg  1124784926.6 B/s, mdev    98855550.5 B/s (  8.8%), confid. +/-    74537085.1 B/s (  6.6%)
>>>   11   1176541407.6 B/s,  avg  1129490061.2 B/s, mdev    95422224.8 B/s (  8.4%), confid. +/-    67230249.7 B/s (  6.0%)
>>>   12    934185352.8 B/s,  avg  1113214668.9 B/s, mdev   106114984.5 B/s (  9.5%), confid. +/-    70420712.5 B/s (  6.3%)
>>>   13   1176550558.1 B/s,  avg  1118086660.3 B/s, mdev   103339448.9 B/s (  9.2%), confid. +/-    65002902.4 B/s (  5.8%)
>>>   14   1176521808.8 B/s,  avg  1122260599.5 B/s, mdev   100711151.3 B/s (  9.0%), confid. +/-    60333655.0 B/s (  5.4%)
>>>   15   1176744840.8 B/s,  avg  1125892882.3 B/s, mdev    98240838.2 B/s (  8.7%), confid. +/-    56319052.3 B/s (  5.0%)
>>>   16   1176593778.5 B/s,  avg  1129061688.3 B/s, mdev    95909740.8 B/s (  8.5%), confid. +/-    52771633.5 B/s (  4.7%)
>>>   17   1176583967.4 B/s,  avg  1131857116.5 B/s, mdev    93715582.2 B/s (  8.3%), confid. +/-    49669258.6 B/s (  4.4%)
>>>   18   1176853301.8 B/s,  avg  1134356904.5 B/s, mdev    91656530.2 B/s (  8.1%), confid. +/-    46905244.8 B/s (  4.1%)
>>>   19   1176592845.7 B/s,  avg  1136579848.8 B/s, mdev    89709043.8 B/s (  7.9%), confid. +/-    44424855.9 B/s (  3.9%)
>>>   20   1176608117.3 B/s,  avg  1138581262.2 B/s, mdev    87871692.6 B/s (  7.7%), confid. +/-    42193098.5 B/s (  3.7%)
>>>   all                     avg  1138581262.2 B/s, mdev    87871692.6 B/s (  7.7%), confid. +/-    42193098.5 B/s (  3.7%)
>>>
>>> Each line shows result of one 30 second long test and average, mean
>>> deviation and 99% confidence interval half width through the iterations
>>> so far. While 17 iteration results are essentially the wire speed minus
>>> TCP overhead, iterations 1, 3 and 12 are more than 20% lower. As results
>>> of the same test on unpatched 5.12-rc7 are much more consistent (the
>>> lowest iteration result through the whole test was 1175939718.3 and the
>>> mean deviation only 276889.1 B/s), it doesn't seem to be just a random
>>> fluctuation.
>>
>> I think I need to relearn the statistical math to understand the above
>> "99% confidence interval half width ":)
> 
> An easy way to understand it is that if the last column shows 42 MB/s,
> it means that with 99% confidence (probability), the measured average
> is within 42 MB/s off the actual one.
> 
>> But the problem do not seems related too much with "99% confidence
>> interval half width ", but with "mean deviation"?
> 
> Mean deviation is a symptom here. What worries me is that most results
> show the same value (corresponding to fully saturated line) with very
> little variation, in some iterations (1, 3 and 12 here) we can suddenly
> see much lower value (by ~2.5 GB/s, i.e. 20-25%). And as each iteration
> runs the connection for 30 seconds, it cannot be just some short glitch.
> 
> I managed to get tcpdump captures yesterday but they are huge even with
> "-s 128" (client ~5.6 GB, server ~9.0 GB) so that working with them is
> rather slow so I did not find anything interesting yet.
> 
>> I tried using netperf, which seems only show throughput of 9415.06
>> (10^6bits/sec) using 10G netdev. which tool did you used to show the
>> above number?
> 
> 9415.06 * 10^6 b/s is 1176.9 * 10^6 B/s so it's about the same as the
> numbers above (the good ones, that is). As this was part of a longer
> test trying different thread counts from 1 to 128, I was using another
> utility I started writing recently:

I tried using the shell script below to simulate your test case:

#!/bin/bash

for((i=0; i<20; i++))
do
        taskset -c 0-31 netperf -t TCP_STREAM -H 192.168.100.2 -l 30 -- -m 1048576
done

And I got quite a stable result: 9413~9416 (10^6 bits/sec) for the 10G netdev.

> 
>   https://github.com/mkubecek/nperf
> 
> It is still raw and a lot of features are missing but it can be already
> used for multithreaded TCP_STREAM and TCP_RR tests. In particular, the
> output above was with
> 
>   nperf -H 172.17.1.1 -l 30 -i 20 --exact -t TCP_STREAM -M 1

I tried your nperf too; unfortunately it does not seem to work on my
system (arm64): it exits very quickly and outputs the result below:

root@(none):/home# nperf -H 192.168.100.2 -l 30 -i 20 --exact -t TCP_STREAM -M 1
server: 192.168.100.2, port 12543
iterations: 20, threads: 1, test length: 30
test: TCP_STREAM, message size: 1048576

1             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%)
2             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
3             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           nan B/s (  nan%)
4             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
5             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
6             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
7             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
8             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
9             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
10            4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
11            4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
12            4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
13            4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
14            4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
15            4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
16            4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
17            4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
18            4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
19            4.0 B/s,  avg           4.0 B/s, mdev           nan B/s (  nan%), confid. +/-           nan B/s (  nan%)
20            4.0 B/s,  avg           4.0 B/s, mdev           nan B/s (  nan%), confid. +/-           nan B/s (  nan%)
all                     avg           4.0 B/s, mdev           nan B/s (  nan%), confid. +/-           nan B/s (  nan%)

> 
> The results are with 1 thread so that they should be also reproducible
> with netperf too. But it needs to be repeated enough times, when
> I wanted to get the packet captures, I did 40 iterations and only two of
> them showed lower result.
> 
> Michal
> 
> .
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-21  8:21             ` Yunsheng Lin
@ 2021-04-21  8:44               ` Michal Kubecek
  2021-04-21  9:25                 ` Yunsheng Lin
  0 siblings, 1 reply; 16+ messages in thread
From: Michal Kubecek @ 2021-04-21  8:44 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On Wed, Apr 21, 2021 at 04:21:54PM +0800, Yunsheng Lin wrote:
> 
> I tried using below shell to simulate your testcase:
> 
> #!/bin/sh
> 
> for((i=0; i<20; i++))
> do
>         taskset -c 0-31 netperf -t TCP_STREAM -H 192.168.100.2 -l 30 -- -m 1048576
> done
> 
> And I got a quite stable result: 9413~9416 (10^6bits/sec) for 10G netdev.

Perhaps try it without the taskset; in my test, there was only one
connection.

> > 
> >   https://github.com/mkubecek/nperf
> > 
> > It is still raw and a lot of features are missing but it can be already
> > used for multithreaded TCP_STREAM and TCP_RR tests. In particular, the
> > output above was with
> > 
> >   nperf -H 172.17.1.1 -l 30 -i 20 --exact -t TCP_STREAM -M 1
> 
> I tried your nperf too, unfortunately it does not seem to work on my
> system(arm64), which exits very quickly and outputs the result below:
> 
> root@(none):/home# nperf -H 192.168.100.2 -l 30 -i 20 --exact -t TCP_STREAM -M 1
> server: 192.168.100.2, port 12543
> iterations: 20, threads: 1, test length: 30
> test: TCP_STREAM, message size: 1048576
> 
> 1             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%)
[...]

Did you start nperfd on the other side? (It plays a role similar to
netserver for netperf.) A few days ago I noticed that there is something
wrong with the error handling in the case of a failed connection, but I
didn't get to fixing it yet.

I'll try running some tests also on other architectures, including arm64
and s390x (to catch potential endianness issues).

Michal

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-21  8:44               ` Michal Kubecek
@ 2021-04-21  9:25                 ` Yunsheng Lin
  2021-04-23  9:42                   ` Yunsheng Lin
  0 siblings, 1 reply; 16+ messages in thread
From: Yunsheng Lin @ 2021-04-21  9:25 UTC (permalink / raw)
  To: Michal Kubecek
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On 2021/4/21 16:44, Michal Kubecek wrote:
> On Wed, Apr 21, 2021 at 04:21:54PM +0800, Yunsheng Lin wrote:
>>
>> I tried using below shell to simulate your testcase:
>>
>> #!/bin/sh
>>
>> for((i=0; i<20; i++))
>> do
>>         taskset -c 0-31 netperf -t TCP_STREAM -H 192.168.100.2 -l 30 -- -m 1048576
>> done
>>
>> And I got a quite stable result: 9413~9416 (10^6bits/sec) for 10G netdev.
> 
> Perhaps try it without the taskset; in my test, there was only one
> connection.

Just tried it, and got a similar result to the above.

> 
>>>
>>>   https://github.com/mkubecek/nperf
>>>
>>> It is still raw and a lot of features are missing, but it can already be
>>> used for multithreaded TCP_STREAM and TCP_RR tests. In particular, the
>>> output above was with
>>>
>>>   nperf -H 172.17.1.1 -l 30 -i 20 --exact -t TCP_STREAM -M 1
>>
>> I tried your nperf too; unfortunately it does not seem to work on my
>> system (arm64): it exits very quickly and outputs the below result:
>>
>> root@(none):/home# nperf -H 192.168.100.2 -l 30 -i 20 --exact -t TCP_STREAM -M 1
>> server: 192.168.100.2, port 12543
>> iterations: 20, threads: 1, test length: 30
>> test: TCP_STREAM, message size: 1048576
>>
>> 1             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%)
> [...]
> 
> Did you start nperfd on the other side? (It plays a role similar to
> netserver for netperf.) A few days ago I noticed that there is something
> wrong with the error handling in the case of a failed connection, but I
> didn't get to fixing it yet.

Yes, I did. If I did not start nperfd, I got "connect: Connection refused"
as below:

nperf -H 192.168.100.2 -l 30 -i 20 --exact -t TCP_STREAM -M 1
server: 192.168.100.2, port 12543
iterations: 20, threads: 1, test length: 30
test: TCP_STREAM, message size: 1048576

connect: Connection refused
failed to connect to '192.168.100.2'
1             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%)
connect: Connection refused
failed to connect to '192.168.100.2'
2             4.0 B/s,  avg           4.0 B/s, mdev           0.0 B/s (  0.0%), confid. +/-           0.0 B/s (  0.0%)
connect: Connection refused
failed to connect to '192.168.100.2'



> 
> I'll try running some tests also on other architectures, including arm64
> and s390x (to catch potential endianness issues).
> 
> Michal
> 
> .
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-21  9:25                 ` Yunsheng Lin
@ 2021-04-23  9:42                   ` Yunsheng Lin
  2021-04-30  3:11                     ` Yunsheng Lin
  0 siblings, 1 reply; 16+ messages in thread
From: Yunsheng Lin @ 2021-04-23  9:42 UTC (permalink / raw)
  To: Michal Kubecek
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On 2021/4/21 17:25, Yunsheng Lin wrote:
> On 2021/4/21 16:44, Michal Kubecek wrote:
> 
>>
>> I'll try running some tests also on other architectures, including arm64
>> and s390x (to catch potential endianness issues).

I tried debugging nperf on arm64 with the below patch:
diff --git a/client/main.c b/client/main.c
index 429634d..de1a3ef 100644
--- a/client/main.c
+++ b/client/main.c
@@ -63,7 +63,10 @@ static int client_init(void)
        ret = client_set_usr1_handler();
        if (ret < 0)
                return ret;
-       return ignore_signal(SIGPIPE);
+       //return ignore_signal(SIGPIPE);
+       signal(SIGPIPE, SIG_IGN);
+
+       return 0;
 }

 static int ctrl_send_start(struct client_config *config)
diff --git a/client/worker.c b/client/worker.c
index ac026893..d269311 100644
--- a/client/worker.c
+++ b/client/worker.c
@@ -7,7 +7,7 @@
 #include "worker.h"
 #include "main.h"

-#define WORKER_STACK_SIZE 16384
+#define WORKER_STACK_SIZE 131072

 struct client_worker_data *workers_data;
 union sockaddr_any test_addr;
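
As an aside on the SIGPIPE hunk above: it only swaps nperf's ignore_signal()
helper for a plain signal() call; either way, with SIGPIPE ignored, a write to
a connection the peer has already closed fails with EPIPE (errno 32 on Linux)
instead of terminating the process. Below is a minimal standalone C sketch of
that pattern, for illustration only; it is not nperf code, and report_send()
is a made-up helper name:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Ignore SIGPIPE so that writing to a peer which has already closed the
 * connection fails with EPIPE instead of killing the process.
 */
static int ignore_sigpipe(void)
{
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = SIG_IGN;
        return sigaction(SIGPIPE, &sa, NULL);
}

/* With SIGPIPE ignored, a failed send() can then be reported normally. */
static ssize_t report_send(int fd, const void *buf, size_t len)
{
        ssize_t ret = send(fd, buf, len, 0);

        if (ret < 0)
                fprintf(stderr, "send: %s\n", strerror(errno));
        return ret;
}

int main(void)
{
        /* Install the disposition early, before any socket I/O. */
        if (ignore_sigpipe() < 0) {
                perror("sigaction");
                return 1;
        }
        return 0;
}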

It produces the below error output:

../nperf/nperf -H 127.0.0.1 -l 3 -i 1 --exact -t TCP_STREAM -M 1
server: 127.0.0.1, port 12543
iterations: 1, threads: 1, test length: 3
test: TCP_STREAM, message size: 1048576

run test begin
send begin
send done: -32
failed to receive server stats
*** Iteration 1 failed, quitting. ***


Tcpdump shows the below output:
09:55:12.253341 IP localhost.53080 > localhost.12543: Flags [S], seq 3954442980, win 65495, options [mss 65495,sackOK,TS val 3268837738 ecr 0,nop,wscale 7], length 0
09:55:12.253363 IP localhost.12543 > localhost.53080: Flags [S.], seq 4240541653, ack 3954442981, win 65483, options [mss 65495,sackOK,TS val 3268837738 ecr 3268837738,nop,wscale 7], length 0
09:55:12.253379 IP localhost.53080 > localhost.12543: Flags [.], ack 1, win 512, options [nop,nop,TS val 3268837738 ecr 3268837738], length 0
09:55:12.253412 IP localhost.53080 > localhost.12543: Flags [P.], seq 1:29, ack 1, win 512, options [nop,nop,TS val 3268837738 ecr 3268837738], length 28
09:55:12.253863 IP localhost.12543 > localhost.53080: Flags [P.], seq 1:17, ack 29, win 512, options [nop,nop,TS val 3268837739 ecr 3268837738], length 16
09:55:12.253891 IP localhost.53080 > localhost.12543: Flags [.], ack 17, win 512, options [nop,nop,TS val 3268837739 ecr 3268837739], length 0
09:55:12.254265 IP localhost.12543 > localhost.53080: Flags [F.], seq 17, ack 29, win 512, options [nop,nop,TS val 3268837739 ecr 3268837739], length 0
09:55:12.301992 IP localhost.53080 > localhost.12543: Flags [.], ack 18, win 512, options [nop,nop,TS val 3268837787 ecr 3268837739], length 0
09:55:15.254389 IP localhost.53080 > localhost.12543: Flags [F.], seq 29, ack 18, win 512, options [nop,nop,TS val 3268840739 ecr 3268837739], length 0
09:55:15.254426 IP localhost.12543 > localhost.53080: Flags [.], ack 30, win 512, options [nop,nop,TS val 3268840739 ecr 3268840739], length 0


Any idea what went wrong here?

Also, would you mind running netperf to see if there is a similar issue
on your system?

>>
>> Michal
>>
>> .
>>
> 
> 
> .
> 


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-23  9:42                   ` Yunsheng Lin
@ 2021-04-30  3:11                     ` Yunsheng Lin
  2021-04-30  3:15                       ` Yunsheng Lin
  0 siblings, 1 reply; 16+ messages in thread
From: Yunsheng Lin @ 2021-04-30  3:11 UTC (permalink / raw)
  To: Michal Kubecek
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On 2021/4/23 17:42, Yunsheng Lin wrote:
> On 2021/4/21 17:25, Yunsheng Lin wrote:
>> On 2021/4/21 16:44, Michal Kubecek wrote:
>>
>>>
>>> I'll try running some tests also on other architectures, including arm64
>>> and s390x (to catch potential endianness issues).
> 
> I tried debugging nperf on arm64 with the below patch:
>
> Any idea what went wrong here?
> 
> Also, would you mind running netperf to see if there is a similar issue
> on your system?

Hi, Michal
    I was able to reproduce the fluctuation for one thread TCP_STREAM test,
so I am assuming it may be more related to the test environment or an nperf issue.

   I plan to send v5 with the handling for stopped netdev queues after the golden
holiday in China. If there is any issue with the patchset, please let me know,
thanks.

> 
>>>
>>> Michal
>>>
>>> .
>>>
>>
>>
>> .
>>
> 
> 
> .
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-30  3:11                     ` Yunsheng Lin
@ 2021-04-30  3:15                       ` Yunsheng Lin
  2021-04-30  6:28                         ` Michal Kubecek
  0 siblings, 1 reply; 16+ messages in thread
From: Yunsheng Lin @ 2021-04-30  3:15 UTC (permalink / raw)
  To: Michal Kubecek
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On 2021/4/30 11:11, Yunsheng Lin wrote:
> On 2021/4/23 17:42, Yunsheng Lin wrote:
>> On 2021/4/21 17:25, Yunsheng Lin wrote:
>>> On 2021/4/21 16:44, Michal Kubecek wrote:
>>>
>>>>
>>>> I'll try running some tests also on other architectures, including arm64
>>>> and s390x (to catch potential endianness issues).
>>
>> I tried debugging nperf on arm64 with the below patch:
>>
>> Any idea what went wrong here?
>>
>> Also, would you mind running netperf to see if there is a similar issue
>> on your system?
> 
> Hi, Michal
>     I was able to reproduce the fluctuation for one thread TCP_STREAM test,

I was *not* able
Sorry for the typo.

> so I am assuming it may be more related to the test environment or an nperf issue.
> 
>    I plan to send v5 with the handling for stopped netdev queues after the golden
> holiday in China. If there is any issue with the patchset, please let me know,
> thanks.
> 
>>
>>>>
>>>> Michal
>>>>
>>>> .
>>>>
>>>
>>>
>>> .
>>>
>>
>>
>> .
>>
> 
> 
> .
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
  2021-04-30  3:15                       ` Yunsheng Lin
@ 2021-04-30  6:28                         ` Michal Kubecek
  0 siblings, 0 replies; 16+ messages in thread
From: Michal Kubecek @ 2021-04-30  6:28 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: davem, kuba, olteanv, ast, daniel, andriin, edumazet, weiwan,
	cong.wang, ap420073, netdev, linux-kernel, linuxarm, mkl,
	linux-can, jhs, xiyou.wangcong, jiri, andrii, kafai,
	songliubraving, yhs, john.fastabend, kpsingh, bpf, pabeni,
	mzhivich, johunt, albcamus, kehuan.feng, a.fatoum, atenart,
	alexander.duyck, hdanton, jgross, JKosina

On Fri, Apr 30, 2021 at 11:15:01AM +0800, Yunsheng Lin wrote:
> On 2021/4/30 11:11, Yunsheng Lin wrote:
> > On 2021/4/23 17:42, Yunsheng Lin wrote:
> >> On 2021/4/21 17:25, Yunsheng Lin wrote:
> >>> On 2021/4/21 16:44, Michal Kubecek wrote:
> >>>
> >>>>
> >>>> I'll try running some tests also on other architectures, including arm64
> >>>> and s390x (to catch potential endianness issues).
> >>
> >> I tried debugging nperf on arm64 with the below patch:
> >>
> >> Any idea what went wrong here?
> >>
> >> Also, would you mind running netperf to see if there is a similar issue
> >> on your system?
> > 
> > Hi, Michal
> >     I was able to reproduce the fluctuation for one thread TCP_STREAM test,
> 
> I was *not* able
> Sorry for the typo.

I was able to reproduce the same problem with netperf:

marfak:~ # for i in {1..60}; do netperf -H 172.17.1.1 -l 30 -t TCP_STREAM -- -m 1048576; done
131072  16384 1048576    30.00    9413.36   
131072  16384 1048576    30.01    7473.68   <---
131072  16384 1048576    30.00    9413.97   
131072  16384 1048576    30.00    9413.76   
131072  16384 1048576    30.01    9024.25   
131072  16384 1048576    30.01    8364.78   
131072  16384 1048576    30.00    9413.22   
131072  16384 1048576    30.00    9414.29   
131072  16384 1048576    30.00    9414.32   
131072  16384 1048576    30.00    9412.58   
131072  16384 1048576    30.00    9412.79   
131072  16384 1048576    30.00    9413.18   
131072  16384 1048576    30.01    8771.57   <---
131072  16384 1048576    30.00    9414.01   
131072  16384 1048576    30.00    9413.93   
131072  16384 1048576    30.00    9413.97   
131072  16384 1048576    30.00    9414.05   
131072  16384 1048576    30.00    9412.92   
131072  16384 1048576    30.00    9413.40   
131072  16384 1048576    30.00    9414.41   
131072  16384 1048576    30.00    9413.25   
131072  16384 1048576    30.00    9413.38   
131072  16384 1048576    30.00    9412.28   
131072  16384 1048576    30.00    9413.50   
131072  16384 1048576    30.00    9414.12   
131072  16384 1048576    30.00    9414.27   
131072  16384 1048576    30.00    9412.96   
131072  16384 1048576    30.00    9413.71   
131072  16384 1048576    30.01    9205.98   
131072  16384 1048576    30.00    9413.69   
131072  16384 1048576    30.00    9413.60   
131072  16384 1048576    30.01    8297.03   <---
131072  16384 1048576    30.00    9414.09   
131072  16384 1048576    30.00    9414.38   
131072  16384 1048576    30.00    9413.62   
131072  16384 1048576    30.00    9411.09   
131072  16384 1048576    30.00    9414.37   
131072  16384 1048576    30.00    9414.37   
131072  16384 1048576    30.00    9412.52   
131072  16384 1048576    30.00    9414.06   
131072  16384 1048576    30.00    9413.66   
131072  16384 1048576    30.00    9411.63   
131072  16384 1048576    30.00    9414.17   
131072  16384 1048576    30.00    9414.07   
131072  16384 1048576    30.00    9414.09   
131072  16384 1048576    30.00    9414.37   
131072  16384 1048576    30.00    9390.00   
131072  16384 1048576    30.00    9413.72   
131072  16384 1048576    30.01    9260.97   
131072  16384 1048576    30.01    9334.91   
131072  16384 1048576    30.00    9413.57   
131072  16384 1048576    30.00    9412.01   
131072  16384 1048576    30.00    9414.36   
131072  16384 1048576    30.00    9412.47   
131072  16384 1048576    30.00    9413.73   
131072  16384 1048576    30.00    9413.48   
131072  16384 1048576    30.00    9413.36   
131072  16384 1048576    30.01    9327.42   
131072  16384 1048576    30.01    9240.33   
131072  16384 1048576    30.00    9413.97   

(filtered only the interesting lines)

But after some more testing, I was also able to see similar results with
an unpatched mainline kernel:

131072  16384 1048576    30.00    9413.28   
131072  16384 1048576    30.01    9007.17   
131072  16384 1048576    30.01    9153.22   
131072  16384 1048576    30.00    9414.28   
131072  16384 1048576    30.01    9244.68   
131072  16384 1048576    30.01    9230.49   
131072  16384 1048576    30.00    8723.24   <---
131072  16384 1048576    30.01    8289.21   <---
131072  16384 1048576    30.01    9258.33   
131072  16384 1048576    30.00    9251.47   
131072  16384 1048576    30.00    9414.23   
131072  16384 1048576    30.01    9276.87   
131072  16384 1048576    30.01    9255.61   
131072  16384 1048576    30.00    9072.78   
131072  16384 1048576    30.00    9412.09   
131072  16384 1048576    30.01    9393.00   
131072  16384 1048576    30.00    9413.39   
131072  16384 1048576    30.01    9404.01   
131072  16384 1048576    30.01    8412.83   <---
131072  16384 1048576    30.01    9368.23   
131072  16384 1048576    30.01    9259.11   
131072  16384 1048576    30.01    9121.65   
131072  16384 1048576    30.01    9169.87   
131072  16384 1048576    30.01    9154.03   
131072  16384 1048576    30.01    9336.34   
131072  16384 1048576    30.00    9187.73   
131072  16384 1048576    30.00    9412.54   
131072  16384 1048576    30.01    6836.37   <---
131072  16384 1048576    30.01    9388.09   
131072  16384 1048576    30.01    8755.78   <---
131072  16384 1048576    30.01    9167.63   
131072  16384 1048576    30.00    9410.80   
131072  16384 1048576    30.01    9392.71   
131072  16384 1048576    30.01    9238.50   
131072  16384 1048576    30.01    9382.78   
131072  16384 1048576    30.01    9328.23   
131072  16384 1048576    30.01    9396.04   
131072  16384 1048576    30.01    9286.10   
131072  16384 1048576    30.00    9412.44   
131072  16384 1048576    30.01    7952.34   <---
131072  16384 1048576    30.01    9309.95   
131072  16384 1048576    30.00    9133.38   
131072  16384 1048576    30.01    8672.75   
131072  16384 1048576    30.00    9414.28   
131072  16384 1048576    30.00    9411.34   
131072  16384 1048576    30.00    9414.27   
131072  16384 1048576    30.01    9313.60   
131072  16384 1048576    30.01    9315.10   
131072  16384 1048576    30.00    9413.23   
131072  16384 1048576    30.01    9285.77   
131072  16384 1048576    30.00    9414.28   
131072  16384 1048576    30.00    9406.39   
131072  16384 1048576    30.01    9343.74   
131072  16384 1048576    30.01    9179.17   
131072  16384 1048576    30.01    9081.18   
131072  16384 1048576    30.00    9412.85   
131072  16384 1048576    30.00    9413.66   
131072  16384 1048576    30.01    9346.16   
131072  16384 1048576    30.00    9410.01   
131072  16384 1048576    30.00    9411.22   

It's not clear why I haven't seen these before, but the problem is
unlikely to be related to your patch set.

Michal

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2021-04-30  6:28 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-16  1:16 [PATCH net v4 0/2] fix packet stuck problem for lockless qdisc Yunsheng Lin
2021-04-16  1:16 ` [PATCH net v4 1/2] net: sched: " Yunsheng Lin
2021-04-19 15:29   ` Michal Kubecek
2021-04-19 23:55     ` Michal Kubecek
2021-04-20  2:23       ` Yunsheng Lin
2021-04-20 20:34       ` Michal Kubecek
2021-04-21  1:52         ` Yunsheng Lin
2021-04-21  5:31           ` Michal Kubecek
2021-04-21  8:21             ` Yunsheng Lin
2021-04-21  8:44               ` Michal Kubecek
2021-04-21  9:25                 ` Yunsheng Lin
2021-04-23  9:42                   ` Yunsheng Lin
2021-04-30  3:11                     ` Yunsheng Lin
2021-04-30  3:15                       ` Yunsheng Lin
2021-04-30  6:28                         ` Michal Kubecek
2021-04-16  1:16 ` [PATCH net v4 2/2] net: sched: fix endless tx action reschedule during deactivation Yunsheng Lin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).