From: Marcelo Ricardo Leitner
Subject: Re: [PATCH net-next,v3 00/12] add flow_rule infrastructure
Date: Thu, 22 Nov 2018 19:08:32 -0200
Message-ID: <20181122210832.GD14375@localhost.localdomain>
References: <20181121025132.14305-1-pablo@netfilter.org> <20181122162220.GB8353@localhost.localdomain>
In-Reply-To: <20181122162220.GB8353@localhost.localdomain>
To: Pablo Neira Ayuso
Cc: netdev@vger.kernel.org, davem@davemloft.net, thomas.lendacky@amd.com,
 f.fainelli@gmail.com, ariel.elior@cavium.com, michael.chan@broadcom.com,
 santosh@chelsio.com, madalin.bucur@nxp.com, yisen.zhuang@huawei.com,
 salil.mehta@huawei.com, jeffrey.t.kirsher@intel.com, tariqt@mellanox.com,
 saeedm@mellanox.com, jiri@mellanox.com, idosch@mellanox.com,
 jakub.kicinski@netronome.com, peppe.cavallaro@st.com,
 grygorii.strashko@ti.com, andrew@lunn.ch,
 vivien.didelot@savoirfairelinux.com, alexandre.torgue@st.com,
 joabreu@synopsys.com, linux-net-drivers@solarflare.com,
 ganeshgr@chelsio.com, ogerlitz@mellanox.com, Manish.Chopra@cavium.com

On Thu, Nov 22, 2018 at 02:22:20PM -0200, Marcelo Ricardo Leitner wrote:
> On Wed, Nov 21, 2018 at 03:51:20AM +0100, Pablo Neira Ayuso wrote:
> > Hi,
> >
> > This patchset is the third iteration [1] [2] [3] to introduce a kernel
> > intermediate representation (IR) to express ACL hardware offloads.
>
> In the v2 cover letter you had:
> """
> However, cost of this layer is very small, adding 1 million rules via
> tc -batch, perf shows:
>
>     0.06%  tc  [kernel.vmlinux]  [k] tc_setup_flow_action
> """
>
> The above doesn't include time spent on children calls, and I'm worried
> about the new allocation done by flow_rule_alloc(), as it can impact
> the rule insertion rate. I'll run some tests here and report back.

I'm seeing +60ms on 1.75s (~3.4%) to add 40k flower rules on ingress
with skip_hw and tc in batch mode, with flows like:

filter add dev p6p2 parent ffff: protocol ip prio 1 flower skip_hw \
    src_mac ec:13:db:00:00:00 dst_mac ec:14:c2:00:00:00 \
    src_ip 56.0.0.0 dst_ip 55.0.0.0 action drop

Only 20ms out of those 60ms were consumed within fl_change() calls
(considering children calls), though.

Do you see something similar? I tested current net-next (d59da3fbfe3f)
both with and without this patchset applied.
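For reference, the setup above can be reproduced along these lines. This is a sketch, not the exact script used: varying the last two octets of src_ip to make each of the 40k rules distinct, and the batch file path, are illustrative assumptions; the device name and addressing come from the example rule.

```shell
#!/bin/sh
# Generate a 40k-rule flower batch file as described above.
# Assumptions (not from the original mail): rules are made distinct by
# varying src_ip; output path is /tmp/flower.batch.

BATCH=${BATCH:-/tmp/flower.batch}

i=0
while [ "$i" -lt 40000 ]; do
    a=$((i / 256))
    b=$((i % 256))
    echo "filter add dev p6p2 parent ffff: protocol ip prio 1" \
         "flower skip_hw src_mac ec:13:db:00:00:00" \
         "dst_mac ec:14:c2:00:00:00 src_ip 56.0.$a.$b" \
         "dst_ip 55.0.0.0 action drop"
    i=$((i + 1))
done > "$BATCH"

# Timing the insertion needs root and the ingress qdisc in place:
#   tc qdisc add dev p6p2 ingress
#   time tc -batch "$BATCH"
#
# To attribute the extra time including children calls (fl_change()
# and its callees), record with call graphs and compare inclusive vs
# self-only cost:
#   perf record -g -- tc -batch "$BATCH"
#   perf report --children      # inclusive (self + children)
#   perf report --no-children   # self-only, as in the 0.06% figure
```

The generation step is separated from the timed `tc -batch` run so the batch-file creation cost doesn't pollute the insertion-rate measurement.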