From: Björn Töpel <bjorn.topel@intel.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>, Björn Töpel <bjorn.topel@gmail.com>
Cc: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org, bpf@vger.kernel.org, magnus.karlsson@intel.com, davem@davemloft.net, kuba@kernel.org, hawk@kernel.org, john.fastabend@gmail.com, intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH bpf-next 3/6] xsk: introduce xsk_do_redirect_rx_full() helper
Date: Fri, 4 Sep 2020 17:39:17 +0200
Message-ID: <dfa75afc-ceb7-76ce-6ba3-3b89c53f92f3@intel.com>
In-Reply-To: <20200904171143.5868999a@carbon>

On 2020-09-04 17:11, Jesper Dangaard Brouer wrote:
> On Fri, 4 Sep 2020 15:53:28 +0200 Björn Töpel
> <bjorn.topel@gmail.com> wrote:
>
>> From: Björn Töpel <bjorn.topel@intel.com>
>>
>> The xsk_do_redirect_rx_full() helper can be used to check if a
>> failure of xdp_do_redirect() was due to the AF_XDP socket having a
>> full Rx ring.
>
> This is very AF_XDP specific. I think that the cpumap could likely
> benefit from a similar approach? E.g. if the cpumap kthread is
> scheduled on the same CPU.
>

At least I thought this was *very* AF_XDP specific, since the kernel is
dependent on what userland runs: userland supplies the allocations
(source) and drains the Rx ring (sink). Maybe I was wrong! :-)

The thing with AF_XDP zero-copy is that we sort of assume that if a
user enabled it, most packets will take XDP_REDIRECT to an AF_XDP
socket.

> But for cpumap we only want this behavior if sched on the same CPU
> as RX-NAPI. This could be "seen" by the cpumap code itself in the
> case bq_flush_to_queue() drops packets, check if rcpu->cpu equals
> smp_processor_id(). Maybe I'm taking this too far?
>

Interesting. So, if you're running on the same core and the redirect
fails for CPUMAP, you'd like to yield the NAPI loop? Is that really OK
from a fairness perspective? I mean, with AF_XDP zero-copy we pretty
much know that all actions will be redirects to a socket. For CPUMAP
types of applications, can that assumption be made?
Björn