From: "Legacy, Allain"
Subject: Re: mlx5 flow create/destroy behaviour
Date: Wed, 29 Mar 2017 12:29:59 +0000
To: Nélio Laranjeiro
Cc: "Adrien Mazarguil (adrien.mazarguil@6wind.com)", "dev@dpdk.org", "Peters, Matt"

> -----Original Message-----
> From: Nélio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> Sent: Wednesday, March 29, 2017 5:45 AM
<...>
> > Almost... the only difference is that the ETH pattern also checks for
> > type=0x8100
>
> Ethernet type was not supported in DPDK 17.02, it was submitted later in
> March [1]. Did you embed the patch in your test?

No, but I am using the default eth mask (rte_flow_item_eth_mask), so it looks
like it is accepting any ether type even though I set the VLAN type along with
the src+dst.

> > > Can you compile in debug mode (by setting
> > > CONFIG_RTE_LIBRTE_MLX5_DEBUG to "y")? Then you should have as many
> > > prints for the creation rules as for the destroyed ones.
> >
> > I can give that a try.

I ran with debug logs enabled and there are no logs coming from the PMD that
indicate an error. All create and destroy calls report a successful result.

I modified my test slightly yesterday to try to determine what is happening.
What I found is that if I use a smaller number of flows the problem does not
happen, but as soon as I use 256 flows or more the problem manifests itself.
What I mean is:

test 1:
  1) start 16 flows (16 unique src MAC addresses sending to 16 unique dst MAC addresses)
  2) create flow rules
  3) check that all subsequent packets are marked correctly
  4) stop traffic
  5) destroy all flow rules
  6) wait 15 seconds
  7) repeat from (1) for 4 iterations

test 2: same as test 1 but with 32 flows
test 3: same as test 1 but with 64 flows
test 4: same as test 1 but with 128 flows
test 5: same as test 1 but with 256 flows (this is where the problem starts
happening)... it could very well be somewhere closer to 128, but I am stepping
up by powers of 2, so this is the first occurrence.

I also modified my test to destroy flow rules in the opposite order that I
created them, just in case ordering is an issue, but that had no effect.

Regards,
Allain

Allain Legacy, Software Developer, Wind River
direct 613.270.2279  fax: 613.492.7870  skype: allain.legacy
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
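
Below is a minimal sketch of the kind of flow setup discussed above, written
against the DPDK 17.02 rte_flow API. It is an assumed reconstruction, not the
actual test code from this thread: NUM_FLOWS, the MAC addresses, the MARK ids,
and the helper names create_test_flows/destroy_test_flows are all invented for
illustration. It shows why rte_flow_item_eth_mask leaves the Ethernet type
wildcarded (its type field is zero) while still matching src and dst, and what
a create/destroy loop over 256 rules looks like.

/*
 * Hypothetical reconstruction only: the exact pattern and actions used in
 * the test are not shown in this thread.  Field names follow DPDK 17.02
 * (struct ether_addr, uint8_t port ids); later releases renamed some of
 * these.  The key point: rte_flow_item_eth_mask matches src/dst but leaves
 * the Ethernet type field zeroed (wildcarded), so a separate VLAN item is
 * what constrains the match to 0x8100 traffic.
 */
#include <string.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

#define NUM_FLOWS 256            /* count at which the problem was observed */

static struct rte_flow *flows[NUM_FLOWS];

static int
create_test_flows(uint8_t port_id)
{
	const struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_error err;
	unsigned int i;

	for (i = 0; i < NUM_FLOWS; i++) {
		struct rte_flow_item_eth eth_spec;
		struct rte_flow_action_mark mark = { .id = i + 1 };

		memset(&eth_spec, 0, sizeof(eth_spec));
		/* one unique (made-up) src/dst MAC pair per flow */
		eth_spec.src.addr_bytes[5] = i;
		eth_spec.dst.addr_bytes[5] = i;

		struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH,
			  .spec = &eth_spec,
			  /* default mask: src+dst matched, ether type ignored */
			  .mask = &rte_flow_item_eth_mask },
			/* no spec/mask: only requires the packet to be VLAN tagged */
			{ .type = RTE_FLOW_ITEM_TYPE_VLAN },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};

		flows[i] = rte_flow_create(port_id, &attr, pattern,
					   actions, &err);
		if (flows[i] == NULL)
			return -1;       /* err.message holds the PMD reason */
	}
	return 0;
}

static void
destroy_test_flows(uint8_t port_id)
{
	struct rte_flow_error err;
	unsigned int i;

	/* walking this array backwards gives the reverse-order variant,
	 * which, per the report above, made no difference */
	for (i = 0; i < NUM_FLOWS; i++)
		if (flows[i] != NULL)
			rte_flow_destroy(port_id, flows[i], &err);
}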