From: John Fastabend <john.fastabend@gmail.com>
To: Maciej Fijalkowski <maciej.fijalkowski@intel.com>, John Fastabend <john.fastabend@gmail.com>
Cc: "Jesper Dangaard Brouer" <jbrouer@redhat.com>, "Daniel Borkmann" <daniel@iogearbox.net>, "Toke Høiland-Jørgensen" <toke@redhat.com>, alardam@gmail.com, magnus.karlsson@intel.com, bjorn.topel@intel.com, andrii.nakryiko@gmail.com, kuba@kernel.org, ast@kernel.org, netdev@vger.kernel.org, davem@davemloft.net, hawk@kernel.org, jonathan.lemon@gmail.com, bpf@vger.kernel.org, jeffrey.t.kirsher@intel.com, maciejromanfijalkowski@gmail.com, intel-wired-lan@lists.osuosl.org, "Marek Majtyka" <marekx.majtyka@intel.com>
Subject: Re: [PATCH v2 bpf 1/5] net: ethtool: add xdp properties flag set
Date: Tue, 08 Dec 2020 22:03:51 -0800
Message-ID: <5fd068c75b92d_50ce20814@john-XPS-13-9370.notmuch>
In-Reply-To: <20201207230755.GB27205@ranger.igk.intel.com>

> On Mon, Dec 07, 2020 at 12:52:22PM -0800, John Fastabend wrote:
> > Jesper Dangaard Brouer wrote:
> > > On Fri, 4 Dec 2020 16:21:08 +0100
> > > Daniel Borkmann <daniel@iogearbox.net> wrote:

[...] pruning the thread to answer Jesper.

> > > Use-case(2): Disable XDP_TX on a driver to save hardware TX-queue
> > > resources, as the use-case is only DDoS. Today we have this problem
> > > with the ixgbe hardware, that cannot load XDP programs on systems with
> > > more than 192 CPUs.
> >
> > The ixgbe issue is just a bug or missing feature in my opinion.
>
> Not a bug, rather a HW limitation?

Well, hardware has some max queue limit. Likely <192, otherwise I would
have kept doing queue per core up to 192. But ideally we should still
load and either share queues across multiple cores or restrict down to a
subset of CPUs. Do you need 192 cores for a 10 Gbps NIC? Probably not.

Yes, it requires some extra care, but it should be doable if someone
cares enough. I gather the current limitation/bug exists because no one
has that configuration and/or has complained loudly enough.
> > I think we just document that XDP_TX consumes resources, and if users
> > care they shouldn't use XDP_TX in programs; in that case hardware
> > should, via program discovery, not allocate the resource. This seems
> > cleaner in my opinion than more bits for features.
>
> But what if I'm on some limited HW that actually has support for XDP
> and I would like to utilize XDP_TX?
>
> Not all drivers that support XDP consume Tx resources. Recently igb got
> support and it shares Tx queues between netstack and XDP.

Makes sense to me.

> I feel like we should have a sort-of best-effort approach in case we
> stumble upon XDP_TX in a prog being loaded, and query the driver whether
> it would be able to provide the Tx resources on the current system,
> given that normally we tend to have a queue per core.

Why do we need to query? I guess you want some indication from the
driver that it's not going to be running in the ideal NIC configuration?
I guess printing a warning would be the normal way to show that. But
maybe your point is that you want something easier to query?

> In that case igb would say yes, ixgbe would say no, and the prog would
> be rejected.

I think the driver should load even if it can't meet the queue-per-core
quota. Refusing to load at all, or just dropping packets on the floor,
is not very friendly. I think we agree on that point.