From: Cong Wang <xiyou.wangcong@gmail.com>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>,
	"Toke Høiland-Jørgensen" <toke@redhat.com>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"David Miller" <davem@davemloft.net>,
	"Vladimir Oltean" <olteanv@gmail.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Daniel Borkmann" <daniel@iogearbox.net>,
	"Andrii Nakryiko" <andriin@fb.com>,
	"Eric Dumazet" <edumazet@google.com>,
	"Wei Wang" <weiwan@google.com>,
	"Cong Wang ." <cong.wang@bytedance.com>,
	"Taehee Yoo" <ap420073@gmail.com>,
	"Linux Kernel Network Developers" <netdev@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	linuxarm@openeuler.org, "Marc Kleine-Budde" <mkl@pengutronix.de>,
	linux-can@vger.kernel.org
Subject: Re: [Linuxarm] Re: [RFC v2] net: sched: implement TCQ_F_CAN_BYPASS for lockless qdisc
Date: Tue, 23 Mar 2021 18:49:06 -0700	[thread overview]
Message-ID: <CAM_iQpVgARDaUd3jdvSA11j=Q_K6KvcKfn7DQavGYXUWmvLZtw@mail.gmail.com> (raw)
In-Reply-To: <9d045462-051e-0cde-24d0-349dd397e2b7@huawei.com>

On Sun, Mar 21, 2021 at 5:55 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2021/3/20 2:15, Cong Wang wrote:
> > On Thu, Mar 18, 2021 at 12:33 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
> >>
> >> On 2021/3/17 21:45, Jason A. Donenfeld wrote:
> >>> On 3/17/21, Toke Høiland-Jørgensen <toke@redhat.com> wrote:
> >>>> Cong Wang <xiyou.wangcong@gmail.com> writes:
> >>>>
> >>>>> On Mon, Mar 15, 2021 at 2:07 PM Jakub Kicinski <kuba@kernel.org> wrote:
> >>>>>>
> >>>>>> I thought pfifo was supposed to be "lockless" and this change
> >>>>>> re-introduces a lock between producer and consumer, no?
> >>>>>
> >>>>> It has never been truly lockless: it uses two spinlocks in the ring
> >>>>> buffer implementation, and q->seqlock was added recently. With this
> >>>>> patch we now also have priv->lock, four locks in total, so our
> >>>>> "lockless" qdisc ends up with more locks than the others. ;) I don't
> >>>>> think we are going in the right direction...
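For reference, roughly where those locks live (sketched from the upstream
ptr_ring and struct Qdisc layouts of that era; priv->lock is the lock added
by the RFC under discussion, not an existing upstream field):

        struct ptr_ring {                 /* backs each pfifo_fast band's skb_array */
                spinlock_t producer_lock; /* taken in ptr_ring_produce() on enqueue */
                spinlock_t consumer_lock; /* taken in ptr_ring_consume() on dequeue */
                /* ... indices and the ring array itself ... */
        };

        struct Qdisc {
                /* ... */
                spinlock_t seqlock;       /* q->seqlock, serializes qdisc_run() for TCQ_F_NOLOCK */
                /* ... */
        };

        /* plus priv->lock from the RFC: four locks on the "lockless" datapath */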
> >>>>
> >>>> Just a thought, have you guys considered adopting the lockless MPSC ring
> >>>> buffer recently introduced into Wireguard in commit:
> >>>>
> >>>> 8b5553ace83c ("wireguard: queueing: get rid of per-peer ring buffers")
> >>>>
> >>>> Jason indicated he was willing to work on generalising it into a
> >>>> reusable library if there was a use case for it. I haven't quite thought
> >>>> through the details of whether this would be such a use case, but
> >>>> figured I'd at least mention it :)
> >>>
> >>> That offer definitely still stands. Generalization sounds like a lot of fun.
> >>>
> >>> Keep in mind though that it's an eventually consistent queue, not an
> >>> immediately consistent one, so that might not match all use cases. It
> >>> works with wg because we always trigger the reader thread anew when it
> >>> finishes, but that doesn't apply to everyone's queueing setup.
> >>
> >> Thanks for mentioning this.
> >>
> >> "multi-producer, single-consumer" seems to match the lockless qdisc's
> >> paradigm too: for now, concurrent enqueuing/dequeuing to the pfifo_fast
> >> queues is not allowed; it is protected by producer_lock or consumer_lock.
> >>
> >> So it would be good to have lockless concurrent enqueuing, while dequeuing
> >> can be protected by qdisc_lock() or q->seqlock, which meets the
> >> "multi-producer, single-consumer" paradigm.
> >
> > I don't think so. Usually we have one queue for each CPU, so we can expect
> > each CPU to have a lockless qdisc assigned, but we cannot assume this in
> > the code, so we still have to deal with multiple CPUs sharing a lockless
> > qdisc. And we usually enqueue and dequeue in process context, which means
> > we could have multiple producers and multiple consumers.
>
> For a lockless qdisc, dequeuing always happens between qdisc_run_begin() and
> qdisc_run_end(), so multiple consumers are protected from each other by
> q->seqlock.
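As a sketch of that serialization, condensed from qdisc_run() and
qdisc_run_begin() as they looked around v5.12 (simplified, locked-qdisc
path and error handling omitted):

        if (qdisc_run_begin(q)) {   /* TCQ_F_NOLOCK: spin_trylock(&q->seqlock) */
                __qdisc_run(q);     /* the dequeue loop; only the trylock winner runs it */
                qdisc_run_end(q);   /* TCQ_F_NOLOCK: spin_unlock(&q->seqlock) */
        }
        /* losers of the trylock just return and rely on the winner to flush */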

So are you saying you will never go lockless for the lockless qdisc? I
thought you really wanted to go lockless with Jason's proposed MPMC ring
buffer code.

>
> For enqueuing, multiple producers are protected by producer_lock, see
> pfifo_fast_enqueue() -> skb_array_produce() -> ptr_ring_produce().
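That last helper is just a plain spinlock around the ring update; roughly as
it appears in include/linux/ptr_ring.h of that era:

        static inline int ptr_ring_produce(struct ptr_ring *r, void *ptr)
        {
                int ret;

                spin_lock(&r->producer_lock);       /* the producer_lock in question */
                ret = __ptr_ring_produce(r, ptr);   /* the actual slot store */
                spin_unlock(&r->producer_lock);

                return ret;
        }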

I think you seriously misunderstand how we classify MPMC or MPSC: it is
not about how we lock them, it is about whether we truly have a single
consumer or multiple consumers, regardless of the locks used, because the
goal is to go lockless.

> I am not sure if a lockless MPSC can work in process context, but even
> if not, the enqueuing is also protected by rcu_read_lock_bh(), which
> provides some kind of atomicity, so producer_lock can be removed when a
> lockless MPSC is used.
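For context, the BH-disabled region being referred to, condensed from
__dev_queue_xmit() in net/core/dev.c around v5.12 (simplified, most
branches omitted):

        rcu_read_lock_bh();                         /* disables BH for the whole xmit path */
        txq = netdev_core_pick_tx(dev, skb, sb_dev);
        q = rcu_dereference_bh(txq->qdisc);
        if (q->enqueue)
                rc = __dev_xmit_skb(skb, q, dev, txq);  /* enqueue + qdisc_run() */
        /* ... */
        rcu_read_unlock_bh();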

I am not sure I even understand what you are saying here. Jason's code
only disables preemption with a busy wait; I can't see why it cannot be
used in process context.

Thanks.


Thread overview: 26+ messages
2021-03-13  2:47 [PATCH RFC] net: sched: implement TCQ_F_CAN_BYPASS for lockless qdisc Yunsheng Lin
2021-03-14  0:03 ` Vladimir Oltean
2021-03-14 10:15   ` Marc Kleine-Budde
2021-03-15  0:50     ` Yunsheng Lin
2021-03-15  3:10 ` [RFC v2] " Yunsheng Lin
2021-03-15 12:29   ` Vladimir Oltean
2021-03-15 13:09   ` Marc Kleine-Budde
2021-03-15 18:53   ` Jakub Kicinski
2021-03-16  0:35     ` Yunsheng Lin
2021-03-16  3:47       ` [Linuxarm] " Yunsheng Lin
2021-03-16  8:15       ` Eric Dumazet
2021-03-16 12:36         ` Yunsheng Lin
2021-03-16 22:48     ` Cong Wang
2021-03-17  1:14       ` Yunsheng Lin
2021-03-17 13:35       ` Toke Høiland-Jørgensen
2021-03-17 13:45         ` Jason A. Donenfeld
2021-03-18  7:33           ` [Linuxarm] " Yunsheng Lin
2021-03-19 18:15             ` Cong Wang
2021-03-22  0:55               ` Yunsheng Lin
2021-03-24  1:49                 ` Cong Wang [this message]
2021-03-24  2:36                   ` Yunsheng Lin
2021-03-19 19:03             ` Jason A. Donenfeld
2021-03-22  1:05               ` Yunsheng Lin
2021-03-18  7:10   ` Ahmad Fatoum
2021-03-18  7:46     ` Yunsheng Lin
2021-03-18  9:09       ` Ahmad Fatoum
