From: Edward Cree <ecree@solarflare.com>
To: Jiri Pirko <jiri@resnulli.us>
Cc: Paul Blakey <paulb@mellanox.com>,
Saeed Mahameed <saeedm@mellanox.com>,
"Oz Shlomo" <ozsh@mellanox.com>,
Jakub Kicinski <jakub.kicinski@netronome.com>,
Vlad Buslov <vladbu@mellanox.com>,
David Miller <davem@davemloft.net>, <netdev@vger.kernel.org>,
Jiri Pirko <jiri@mellanox.com>, Roi Dayan <roid@mellanox.com>
Subject: Re: [PATCH net-next 0/3] net/sched: act_ct: Add support for specifying tuple offload policy
Date: Mon, 18 May 2020 19:02:51 +0100 [thread overview]
Message-ID: <d5be2555-faf3-7ca0-0c23-f2bf92873621@solarflare.com> (raw)
In-Reply-To: <20200518172542.GE2193@nanopsycho>
On 18/05/2020 18:25, Jiri Pirko wrote:
> Is it worth to have an object just for this particular purpose? In the
> past I was trying to push a tc block object that could be added/removed
> and being used to insert filters w/o being attached to any qdisc. This
> was frowned upon and refused because the existence of block does not
> have any meaning w/o being attached.

A tc action doesn't have any meaning either until it is attached to a
filter. Is the consensus that the 'tc action' API/command set was a
mistake, or am I misunderstanding the objection?

> What you suggest with zone sounds quite similar. More to that, it is
> related only to act_ct. Is it a good idea to have a common object in TC
> which is actually used as internal part of act_ct only?

Well, really it's related as much to flower ct_state as to act_ct: the
policy numbers control when a conntrack rule (from the zone) gets
offloaded into drivers, thus determining whether a packet (which has
been through an act_ct to make it +trk) is ±est.
It's because it has a scope broader than a single ct action that I'm
resistant to hanging it off act_ct in this way.
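
(For concreteness, the interplay described above shows up in the usual
two-chain flower/act_ct rule pair — a sketch only, with illustrative
device names and zone numbers: chain 0 sends untracked packets through
conntrack, making them +trk, and chain 1 then matches on whether the
connection is established. Whether an offloaded datapath sees +est for
a given packet depends on the conntrack entry having been pushed to
hardware, which is exactly what the proposed policy governs.)

```shell
# Illustrative only: device names, zone number and chain layout are examples.
# Chain 0: untracked packets go through conntrack zone 1, becoming +trk.
tc filter add dev eth0 ingress protocol ip prio 1 chain 0 \
    flower ct_state -trk \
    action ct zone 1 pipe action goto chain 1

# Chain 1: established connections are forwarded; everything else stays in
# software. Whether hardware sees a packet as +est depends on the conntrack
# entry having been offloaded -- the behaviour the policy controls.
tc filter add dev eth0 ingress protocol ip prio 1 chain 1 \
    flower ct_state +trk+est \
    action mirred egress redirect dev eth1
```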

Also it still somewhat bothers me that this policy isn't scoped to the
device; I realise that the current implementation of a single flow
table shared by all offloading devices is what forces that, but it
just doesn't seem semantically right that the policy on when to
offload a connection is global across devices with potentially
differing capabilities (e.g. size of their conntrack tables) that
might want different policies.

(And a 'tc ct add dev eth0 zone 1 policy_blah...' would conveniently
give a hook point for callback (1) from my offtopic ramble, that the
driver could use to register for connections in the zone and start
offloading them to hardware, rather than doing it the first time it
sees that zone show up in an act_ct it's offloading. You don't
really want to do the same in the non-device-qualified case because
that could use up HW table space for connections in a zone you're
not offloading any rules for.)
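
(A device-qualified object along those lines might look something like
the following — entirely hypothetical, since no 'tc ct' object exists
today; the policy knob names are borrowed from the patch series under
discussion:

```shell
# Hypothetical command set, not implemented anywhere -- a sketch of the idea.
# Create the zone object on a specific device, with per-device offload policy
# (knob names borrowed from the policy_pkts/policy_timeout patches):
tc ct add dev eth0 zone 1 policy_pkts 10 policy_timeout 120

# Rules then reference the pre-existing zone; the driver already knows, from
# the object above, to register for and offload zone 1 connections on eth0:
tc filter add dev eth0 ingress protocol ip \
    flower ct_state -trk action ct zone 1

# Explicit teardown, rather than implicit destruction with the last rule:
tc ct del dev eth0 zone 1
```
)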

Basically I'm just dreaming of a world where TC does a lot more with
explicit objects that it creates and then references, rather than
drivers having to implicitly create HW objects for things the first
time a rule tries to reference them.

"Is it worth" all these extra objects? Really that depends on how
much simpler the drivers can become as a result; this is the control
path, so programmer time is worth more than machine time, and space
in the programmer's head is worth more than machine RAM ;-)

-ed
Thread overview: 13+ messages
2020-05-14 13:48 [PATCH net-next 0/3] net/sched: act_ct: Add support for specifying tuple offload policy Paul Blakey
2020-05-14 13:48 ` [PATCH net-next 1/3] netfilter: flowtable: Control flow offload timeout interval Paul Blakey
2020-05-14 13:48 ` [PATCH net-next 2/3] net/sched: act_ct: Add policy_pkts tuple offload control policy Paul Blakey
2020-05-14 13:48 ` [PATCH net-next 3/3] net/sched: act_ct: Add policy_timeout " Paul Blakey
2020-05-14 14:04 ` [PATCH net-next 0/3] net/sched: act_ct: Add support for specifying tuple offload policy Edward Cree
2020-05-14 14:49 ` Jiri Pirko
2020-05-14 15:28 ` Edward Cree
2020-05-18 16:17 ` Paul Blakey
2020-05-18 16:48 ` Edward Cree
2020-05-18 17:25 ` Jiri Pirko
2020-05-18 18:02 ` Edward Cree [this message]
2020-05-26 9:25 ` Paul Blakey
2020-05-26 16:17 ` Edward Cree