netdev.vger.kernel.org archive mirror
From: Daniel Borkmann <daniel@iogearbox.net>
To: Amery Hung <ameryhung@gmail.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>,
	Stanislav Fomichev <sdf@google.com>,
	netdev@vger.kernel.org, bpf@vger.kernel.org,
	yangpeihao@sjtu.edu.cn, toke@redhat.com, jiri@resnulli.us,
	xiyou.wangcong@gmail.com, yepeilin.cs@gmail.com
Subject: Re: [RFC PATCH v7 0/8] net_sched: Introduce eBPF based Qdisc
Date: Thu, 25 Jan 2024 12:57:24 +0100	[thread overview]
Message-ID: <8f270943-637f-5399-be04-82fdbef4a648@iogearbox.net> (raw)
In-Reply-To: <CAMB2axPEO+JU36mhwp=-9FdsCsNRObbou6-YnMJnAr+A8PNwrA@mail.gmail.com>

On 1/24/24 10:26 PM, Amery Hung wrote:
> On Wed, Jan 24, 2024 at 7:27 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>> On 1/24/24 3:11 PM, Jamal Hadi Salim wrote:
>>> On Wed, Jan 24, 2024 at 8:08 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>>>> On 1/24/24 1:09 PM, Jamal Hadi Salim wrote:
>>>>> On Tue, Jan 23, 2024 at 4:13 PM Stanislav Fomichev <sdf@google.com> wrote:
>>>>>> On 01/17, Amery Hung wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am continuing the work on the eBPF-based Qdisc based on Cong’s previous
>>>>>>> RFC. The following are some use cases of the eBPF Qdisc:
>>>>>>>
>>>>>>> 1. Allow customizing Qdiscs in an easier way, so that people don't
>>>>>>>       have to write a complete Qdisc kernel module just to experiment
>>>>>>>       with a new queuing algorithm.
>>>>>>>
>>>>>>> 2. Solve EDT's problem. EDT calculates the "tokens" in clsact, which
>>>>>>>       runs before enqueue, so it is impossible to adjust those "tokens"
>>>>>>>       after packets get dropped in enqueue. With an eBPF Qdisc, this is
>>>>>>>       easily solved with a map shared between clsact and sch_bpf.
>>>>>>>
>>>>>>> 3. Replace qevents, as now the user gains much more control over the
>>>>>>>       skb and queues.
>>>>>>>
>>>>>>> 4. Provide a new way to reuse TC filters. Currently TC relies on filter
>>>>>>>       chain and block to reuse the TC filters, but they are too complicated
>>>>>>>       to understand. With eBPF helper bpf_skb_tc_classify(), we can invoke
>>>>>>>       TC filters on _any_ Qdisc (even on a different netdev) to do the
>>>>>>>       classification.
>>>>>>>
>>>>>>> 5. Potentially pave the way for ingress to queue packets, although
>>>>>>>       the current implementation is still egress-only.
>>>>>>>
>>>>>>> I’ve combed through previous comments and appreciate the feedback.
>>>>>>> The major changes in this RFC are the use of a kptr to the skb to
>>>>>>> maintain the validity of the skb during its lifetime in the Qdisc,
>>>>>>> dropping the rbtree maps, and the inclusion of two examples.
>>>>>>>
>>>>>>> Some questions for discussion:
>>>>>>>
>>>>>>> 1. We now pass a trusted kptr of sk_buff to the program instead of
>>>>>>>       __sk_buff. This makes most helpers that use __sk_buff incompatible
>>>>>>>       with the eBPF qdisc. An alternative is to still use __sk_buff in the
>>>>>>>       context and use bpf_cast_to_kern_ctx() to acquire the kptr. However,
>>>>>>>       this can only be applied to the enqueue program, since in the dequeue
>>>>>>>       program skbs do not come from the ctx but are kptrs exchanged out of
>>>>>>>       maps (i.e., there is no __sk_buff). Any suggestion for making the
>>>>>>>       skb kptr and the helper functions compatible?
>>>>>>>
>>>>>>> 2. The current patchset uses netlink. Do we also want to use bpf_link
>>>>>>>       for attachment?
>>>>>>
>>>>>> [..]
>>>>>>
>>>>>>> 3. People have suggested struct_ops. We chose not to use struct_ops
>>>>>>>       since users might want to create multiple bpf qdiscs with different
>>>>>>>       implementations. The current struct_ops attachment model does not
>>>>>>>       seem to support replacing only the functions of a specific instance
>>>>>>>       of a module, but I might be wrong.
>>>>>>
>>>>>> I still feel like it deserves at least a try. Maybe we can find some potential
>>>>>> path where struct_ops can allow different implementations (Martin probably
>>>>>> has some ideas about that). I looked at the bpf qdisc itself and it doesn't
>>>>>> really have anything complicated (besides trying to play nicely with other
>>>>>> tc classes/actions, but I'm not sure how relevant that is).
>>>>>
>>>>> Are you suggesting that it is a nuisance to integrate with the
>>>>> existing infra? I would consider it being a lot more than "trying to
>>>>> play nicely". Besides, it's a kfunc and people will not be forced to
>>>>> use it.
>>>>
>>>> What's the use case?
>>>
>>> What's the use case for enabling existing infra to work? Sure, let's
>>> rewrite everything from scratch in ebpf. And then introduce new
>>> tooling which only well-funded companies have the resources to build
>>> and manage. Open source is about choices and sharing, and this is
>>> about choices and sharing.
>>>
>>>> If you already go that route and implement your own
>>>> qdisc, why is there a need to take the performance hit and go all the
>>>> way into the old-style cls/act infra when it can be done in a more
>>>> straightforward way natively?
>>>
>>> Who is forcing you to use the kfunc? This is about choice.
>>> What is ebpf these days anyway? Is it a) a programming environment, or
>>> b) the only way to do things? I see it as available infra, i.e. #a,
>>> not as an answer looking for a question. IOW, it is something we can
>>> use to build the infra we need, using kfuncs when it makes sense. Not
>>> everybody has infinite resources to keep hacking things into ebpf.
>>>
>>>> For the vast majority of cases this will be some
>>>> very lightweight classification anyway (if not outsourced to tc egress,
>>>> given the lock). If there is a concrete production need, it could be
>>>> added; otherwise, absent an immediate use case that cannot be solved
>>>> another way, I would not add unnecessary kfuncs.
>>>
>>> "Unnecessary" is really your view.
>>
>> Looks like we're talking past each other? If there is no plan to use it
>> in production (I assume Amery would be able to answer?), why add it right
>> now to the initial series, only to figure out later on (worst case in a
>> few years), when the time comes, that the kfunc does not fit the actual
>> need? You've probably seen the life-cycle doc (Documentation/bpf/kfuncs.rst),
>> and while changes can be made, they should still be mindful of potential
>> breakages the longer it's out in the wild, hence my question whether it's
>> planned to be used, given it wasn't in the samples.
> 
> We would like to reuse existing TC filters. Like Jamal says, changing
> filter rules in production can be done easily with existing tooling.
> Besides, when users are only interested in exploring scheduling
> algorithms but not in classifying traffic, they don't need to replicate
> the filters again in bpf. I can add bpf_skb_tc_classify() test cases in
> the next series if that helps.

In that case, please add a BPF selftest for exercising the kfunc, and also
expand the commit description with the above rationale.

Thanks,
Daniel


Thread overview: 33+ messages
2024-01-17 21:56 [RFC PATCH v7 0/8] net_sched: Introduce eBPF based Qdisc Amery Hung
2024-01-17 21:56 ` [RFC PATCH v7 1/8] " Amery Hung
2024-01-23 23:51   ` Martin KaFai Lau
2024-01-24  5:22     ` Amery Hung
2024-01-26  2:22       ` Martin KaFai Lau
2024-01-27  1:17         ` Amery Hung
2024-01-30  6:39           ` Martin KaFai Lau
2024-01-30 17:49             ` Kui-Feng Lee
2024-01-31  1:01               ` Martin KaFai Lau
2024-01-31 16:49                 ` Kui-Feng Lee
2024-01-31 16:59                   ` Amery Hung
2024-01-31 16:23             ` Amery Hung
2024-02-02  1:47               ` Martin KaFai Lau
2024-02-09 20:14                 ` Amery Hung
2024-01-17 21:56 ` [RFC PATCH v7 2/8] net_sched: Add kfuncs for working with skb Amery Hung
2024-01-17 21:56 ` [RFC PATCH v7 3/8] net_sched: Introduce kfunc bpf_skb_tc_classify() Amery Hung
2024-01-17 21:56 ` [RFC PATCH v7 4/8] net_sched: Add reset program Amery Hung
2024-01-17 21:56 ` [RFC PATCH v7 5/8] net_sched: Add init program Amery Hung
2024-01-17 21:56 ` [RFC PATCH v7 6/8] tools/libbpf: Add support for BPF_PROG_TYPE_QDISC Amery Hung
2024-01-23  0:17   ` Andrii Nakryiko
2024-01-23 19:40     ` Amery Hung
2024-01-17 21:56 ` [RFC PATCH v7 7/8] samples/bpf: Add an example of bpf fq qdisc Amery Hung
2024-01-24 10:29   ` Daniel Borkmann
2024-01-26 19:49     ` Amery Hung
2024-01-17 21:56 ` [RFC PATCH v7 8/8] samples/bpf: Add an example of bpf netem qdisc Amery Hung
2024-01-23 21:13 ` [RFC PATCH v7 0/8] net_sched: Introduce eBPF based Qdisc Stanislav Fomichev
2024-01-24 10:10   ` Daniel Borkmann
2024-01-24 12:09   ` Jamal Hadi Salim
2024-01-24 13:07     ` Daniel Borkmann
2024-01-24 14:11       ` Jamal Hadi Salim
2024-01-24 15:26         ` Daniel Borkmann
2024-01-24 21:26           ` Amery Hung
2024-01-25 11:57             ` Daniel Borkmann [this message]
