From: Vlad Buslov <vladbu@nvidia.com>
To: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Simon Horman <simon.horman@corigine.com>,
	David Miller <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Cong Wang <xiyou.wangcong@gmail.com>,
	Jiri Pirko <jiri@mellanox.com>, <netdev@vger.kernel.org>,
	<oss-drivers@corigine.com>,
	Baowen Zheng <baowen.zheng@corigine.com>,
	Louis Peens <louis.peens@corigine.com>
Subject: Re: [PATCH net-next 1/3] flow_offload: allow user to offload tc action to net device
Date: Tue, 27 Jul 2021 19:47:43 +0300
Message-ID: <ygnh4kcfr9e8.fsf@nvidia.com>
In-Reply-To: <95d6873c-256c-0462-60f7-56dbffb8221b@mojatatu.com>

On Tue 27 Jul 2021 at 19:13, Jamal Hadi Salim <jhs@mojatatu.com> wrote:
> On 2021-07-27 10:38 a.m., Vlad Buslov wrote:
>> On Tue 27 Jul 2021 at 16:04, Simon Horman <simon.horman@corigine.com> wrote:
>
>>>>
>>>> Also, please show a tc command line in the cover letter for how one
>>>> would ask for a specific action to be offloaded.
>>>
>>> In practice actions are offloaded when a flow using them is offloaded.
>>> So I think we need to consider what the meaning of IN_HW is.
>>>
>>> Is it that:
>>>
>>> * The driver (and potentially hardware, though not in our current
>>>    implementation) has accepted the action for offload;
>>> * That a classifier that uses the action has been offloaded;
>>> * Or something else?
>> I think we have the same issue with filters - they might not be in
>> hardware even after the driver callback returned "success" (due to the
>> neigh state being invalid for tunnel_key encap, for example).
>> 
>
> Sounds like we need another state for this. Otherwise, how do you debug
> that something is sitting in the driver and not in hardware after you
> issued a command to offload it? How do I tell today?
> Also, knowing the reason why something is sitting in the driver would
> be helpful.

It is not just about adding another state. The issue is that there is no
way for drivers to change the state of a software filter dynamically.
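
To illustrate (a minimal sketch; the driver structure and helper names
here are hypothetical, not taken from any real driver):

  /* A flow block callback can accept a rule but defer the actual
   * hardware insertion, e.g. until the tunnel neighbour resolves.
   * Returning 0 here still makes TC count the rule as offloaded,
   * and there is no API for the driver to clear IN_HW later. */
  static int example_flower_replace(struct example_priv *priv,
                                    struct flow_cls_offload *f)
  {
          struct example_flow *flow;

          flow = example_flow_parse(priv, f);
          if (IS_ERR(flow))
                  return PTR_ERR(flow);

          if (!example_encap_neigh_valid(flow)) {
                  /* Insert later, from the neigh update handler. */
                  example_defer_hw_insert(priv, flow);
                  return 0;
          }

          return example_hw_insert(priv, flow);
  }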

>
>>> With regards to a counter, I'm not quite sure what this would be:
>>>
>>> * The number of devices where the action has been offloaded (which ties
>>>    into the question of what we mean by IN_HW)
>>> * The number of offloaded classifier instances using the action
>>> * Something else
>> I would prefer to have semantics similar to filters:
>> 1. Count the number of driver callbacks that returned "success".
>> 2. If the count > 0, then set the in_hw flag.
>> 3. Set in_hw_count to the success count.
>> This would allow the user to immediately determine whether the action
>> passed driver validation.
>>
>
> I didn't follow this:
> Are we referring to the "block" semantics (where a filter, for
> example, applies to multiple devices)?

This uses the indirect offload infrastructure, which means all drivers
registered in flow_block_indr_dev_list will receive action offload
requests.
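
Roughly, the flow I have in mind is the following sketch, modeled on
what tc_setup_cb_add() does for filters (the function and field names
below are illustrative, not actual patch code):

  /* Offload an action via the indirect block infrastructure: every
   * registered driver gets a callback, and we count the successes. */
  static int example_action_offload_add(struct tc_action *act)
  {
          int ok_count;

          /* 1. Count driver callbacks that returned "success". */
          ok_count = example_call_indr_action_cbs(act);
          if (ok_count < 0)
                  return ok_count;

          /* 3. Set in_hw_count to the success count. */
          act->in_hw_count = ok_count;

          /* 2. If the count > 0, set the in_hw flag. */
          if (ok_count > 0)
                  example_set_in_hw(act);

          return 0;
  }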

>
>>>
>>> Regarding a flag to control offload:
>>>
>>> * For classifiers (at least the flower classifier) there are the skip_sw
>>>    and skip_hw flags, which allow control of placement of a classifier in
>>>    SW and HW.
>>> * We could add similar flags for actions, which at least in my
>>>    world view would have the net effect of controlling which classifiers
>>>    can be added to SW and HW - f.e. a classifier that uses an action
>>>    marked skip_hw could not be added to HW.
>
> I guess it depends on the hardware implementation.
> In S/W we have two modes:
> Approach A: 1) create an action and then 2) bind it to a filter.
> Approach B: 1) create a filter and then 2) bind it to an action.
>
> And step 2 of approach A can be repeated multiple times for the same
> action (which would require some index as a reference for the action).
> To Simon's comment above, that would mean allowing
> "a classifier that uses an action marked skip_hw to be added to HW",
> i.e. some hardware is capable of doing both option #A and #B.
>
> Today's offload assumes #B - in which both the filter and the action
> are assumed offloaded.
>
> I am hoping whatever approach we end up agreeing on doesn't limit
> either mode.
>
>>> * Doing so would add some extra complexity, and it's not immediately
>>>    apparent to me what the use case would be given that there are already
>>>    flags for classifiers.
>> Yeah, adding such a flag for action offload seems to complicate things.
>> Also, the "skip_sw" flag doesn't even make much sense for actions. I
>> thought that a "skip_hw" flag would be nice to have for users that would
>> like to avoid "spamming" their NIC drivers (potentially causing higher
>> latency and resource consumption) with filters/actions they have no
>> intention to offload to hardware, but I'm not sure how useful that
>> option really is.
>
> Hold on, Vlad.
> So you are looking at this mostly as an optimization to speed up h/w
> control updates? ;->

No. How would adding more flags improve the h/w update rate? I was just
thinking that it is strange that users who are not interested in
offloads would suddenly have higher memory usage for their actions just
because they happen to have an offload-capable driver loaded. But it is
not a major concern for me.
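
For context, the existing filter-level control that I was drawing the
analogy from looks like this (the device and addresses are just
examples):

  tc filter add dev eth0 ingress protocol ip flower skip_hw \
      dst_ip 192.0.2.1 action drop
  tc filter add dev eth0 ingress protocol ip flower skip_sw \
      dst_ip 192.0.2.2 action drop

A skip_hw filter is never offered to drivers, while a skip_sw filter
fails outright if no driver accepts it.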

>
> I was looking at it more as a (currently missing) feature improvement.
> We already have a use case that is implemented by s/w today. The feature
> mimics it in h/w.
>
> At minimum, all existing NICs should be able to support the counters
> mapped to simple actions like drop. I understand if some can't support
> separately offloading things like tunnels.
> So the syntax is something along the lines of:
>
> tc actions add action drop index 15 skip_sw
> tc filter add dev ...parent ... protocol ip prio X ..\
> u32/flower skip_sw match ... flowid 1:10 action gact index 15
>
> You get an error if counter index 15 is not offloaded or
> if skip_sw was left out.
>
> And then later on, if you support sharing of actions:
> tc filter add dev ...parent ... protocol ip prio X2 ..\
> u32/flower skip_sw match ... flowid 1:10 action gact index 15
>
> cheers,
> jamal
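
With semantics like the ones described above, verifying the state would
then look something like this (assuming the series ends up exporting
in_hw/in_hw_count for actions the same way filter dumps do today):

  tc actions add action drop index 15 skip_sw
  tc -s actions get action gact index 15
  # expect the dump to show in_hw / in_hw_count > 0 once a driver
  # has accepted the action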


