From: Adrien Mazarguil
Subject: Re: [RFC] Generic flow director/filtering/classification API
Date: Mon, 18 Jul 2016 17:00:29 +0200
To: "Chandran, Sugesh"
Cc: "dev@dpdk.org", Thomas Monjalon, "Zhang, Helin", "Wu, Jingjing",
 Rasesh Mody, Ajit Khaparde, Rahul Lakkireddy, "Lu, Wenzhuo", Jan Medala,
 John Daley, "Chen, Jing D", "Ananyev, Konstantin", Matej Vido,
 Alejandro Lucero, Sony Chacko, Jerin Jacob, "De Lara Guarch, Pablo",
 Olga Shern, "Chilikin, Andrey"

On Mon, Jul 18, 2016 at 01:26:09PM +0000, Chandran, Sugesh wrote:
> Hi Adrien,
> Thank you for getting back on this.
> Please find my comments below.

Hi Sugesh,

Same for me, I removed again the parts we agree on.

[...]
> > > > > > > [Sugesh] Another concern is the cost and time of installing
> > > > > > > these rules in the hardware. Can we make these APIs
> > > > > > > time-bound (or at least provide an option to set a time
> > > > > > > limit on their execution), so that the application doesn't
> > > > > > > have to wait so long when installing and deleting flows on
> > > > > > > slow hardware/NICs? What do you think? Most datapath flow
> > > > > > > installations are dynamic and triggered only when there is
> > > > > > > ingress traffic. Delays in flow insertion/deletion have
> > > > > > > unpredictable consequences.
> > > > > >
> > > > > > This API is (currently) aimed at the control path only, and
> > > > > > must indeed be assumed to be slow. Creating millions of rules
> > > > > > may take quite a long time as it may involve syscalls and
> > > > > > other time-consuming synchronization on the PMD side.
> > > > > >
> > > > > > So currently there is no plan to have rules added from the
> > > > > > data path with time constraints. I think that would be
> > > > > > implemented through a different set of functions anyway.
> > > > > >
> > > > > > I do not think adding time limits is practical; even
> > > > > > specifying in the API that creating a single flow rule must
> > > > > > take less than a maximum number of seconds in order to be
> > > > > > effective is too much of a constraint (applications that
> > > > > > create all flows during init may not care after all).
> > > > > >
> > > > > > You should consider in any case that modifying flow rules
> > > > > > will always be slower than receiving packets, there is no way
> > > > > > around that. Applications have to live with it and provide a
> > > > > > software fallback for incoming packets while managing flow
> > > > > > rules.
> > > > > >
> > > > > > Moreover, think about what happens when you hit the maximum
> > > > > > number of flow rules and cannot create any more. Applications
> > > > > > need to implement some kind of fallback in their data path.
> > > > > >
> > > > > > Offloading flows in HW is also only useful if they live much
> > > > > > longer than the time taken to create and delete them. Perhaps
> > > > > > applications may choose to do so after detecting long-lived
> > > > > > flows such as TCP sessions.
> > > > > >
> > > > > > You may have one separate control thread dedicated to
> > > > > > managing flows and keep your normal control thread unaffected
> > > > > > by delays. Several threads can even be dedicated, one per
> > > > > > device.
> > > > > [Sugesh] I agree that flow insertion cannot be as fast as the
> > > > > packet receiving rate. From an application point of view the
> > > > > problem arises when hardware flow insertion takes longer than
> > > > > software flow insertion. At the very least the application has
> > > > > to know the cost of inserting/deleting a rule in hardware
> > > > > beforehand; otherwise how can it choose the right flow
> > > > > candidates for hardware? My point here is that the application
> > > > > expects deterministic behavior from a classifier while
> > > > > inserting and deleting rules.
> > > >
> > > > Understood, however it will be difficult to estimate,
> > > > particularly if a PMD must rearrange flow rules to make room for
> > > > a new one due to priority level collisions or some other
> > > > HW-related reason. I mean, the time spent cannot be assumed to be
> > > > constant; even PMDs cannot know it in advance because it also
> > > > depends on the performance of the host CPU.
> > > >
> > > > Such applications may find it easier to measure elapsed time for
> > > > the rules they create, make statistics and extrapolate from this
> > > > information for future rules. I do not think the PMD can help
> > > > much here.
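
(To illustrate the above, here is a rough sketch of how an application
could measure insertion cost and extrapolate; flow_rule_create() merely
stands in for whatever creation entry point this API ends up providing,
and the averaging and offload-threshold constants are arbitrary:)

 /*
  * Sketch only: time each HW rule insertion with the TSC and keep a
  * running average the application can extrapolate from when deciding
  * whether a future flow is worth offloading.
  */
 #include <stdint.h>
 #include <rte_cycles.h>

 /* Placeholder for the (not yet finalized) rule creation function. */
 extern int flow_rule_create(uint8_t port_id, const void *rule);

 static uint64_t insert_cycles_avg;

 static int
 timed_rule_create(uint8_t port_id, const void *rule)
 {
         uint64_t start = rte_rdtsc();
         int ret = flow_rule_create(port_id, rule);

         /* Exponentially weighted moving average, weight 1/8. */
         insert_cycles_avg =
                 (insert_cycles_avg * 7 + (rte_rdtsc() - start)) / 8;
         return ret;
 }

 /*
  * Offload only flows expected to live much longer than the time it
  * takes to insert them (factor chosen arbitrarily for the example).
  */
 static inline int
 worth_offloading(uint64_t expected_lifetime_cycles)
 {
         return expected_lifetime_cycles > insert_cycles_avg * 100;
 }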
> > > [Sugesh] From an application point of view this can be an issue.
> > > There is even a security concern when we program a short-lived
> > > flow. Consider this case:
> > >
> > > 1) The control plane programs the hardware with a queue
> > >    termination flow.
> > > 2) The software dataplane is programmed to treat packets from that
> > >    specific queue accordingly.
> > > 3) The flow is removed from the hardware (consider this a long
> > >    wait), or the hardware may even take more time to report the
> > >    status than to physically remove the rule. Packets in the queue
> > >    are then no longer considered matched/flow hits because the
> > >    software dataplane update has yet to happen.
> > >
> > > We need a way to sync the software datapath with the classifier
> > > APIs even though they are programmed from different control
> > > threads.
> > >
> > > Are we saying these APIs are only meant for user-defined static
> > > flows?
> >
> > No, that is definitely not the intent. These are good points.
> >
> > With the specified API, applications may have to adapt their logic
> > and take extra precautions in order to remain on the safe side at
> > all times.
> >
> > For your above example, the application cannot assume a rule is
> > added/deleted as long as the PMD has not completed the related
> > operation, which means keeping the SW rule/fallback in place in the
> > meantime. This should address the security concern as long as, after
> > removing a rule, packets end up in a default queue entirely
> > processed by SW. Obviously this may worsen response time.
> >
> > The ID action can help with this. By knowing which rule a received
> > packet is associated with, processing can be temporarily offloaded
> > to another thread without much complexity.
> [Sugesh] Setting an ID for every flow may not be viable, especially
> when the ID is small (only 8 bits, say). I am not sure this is a valid
> case though.

Agreed, I'm not saying this solution works for all devices, particularly
those that do not support ID at all.

> How about a hardware flow flag in the packet descriptor that is set
> whenever a packet hits any hardware rule? This way software is not
> worried about/blocked by a hardware rule. Even though there is an
> additional overhead in validating this flag, the software datapath can
> identify hardware-processed packets easily. Packets then traverse the
> software fallback path until the rule configuration is complete. This
> flag avoids setting an ID action for every hardware flow being
> configured.

That makes sense. I see it as a sort of single bit ID but it could be
implemented through a different action for less capable devices. PMDs
that support 32 bit IDs could reuse the same code for both features.

I understand you'd prefer having this feature always present, however we
already know that not all PMDs/devices support it, and like everything
else this is a kind of offload that needs to be explicitly requested by
the application as it may not be needed.

If we go with the separate action, then perhaps it would make sense to
rename "ID" to "MARK" to make things clearer:

 RTE_FLOW_ACTION_TYPE_FLAG /* Flag packets processed by flow rule. */

 RTE_FLOW_ACTION_TYPE_MARK /* Attach a 32 bit value to a packet. */

I guess the result of the FLAG action would be something in ol_flags.

Thoughts?
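
To make this more concrete, an application RX loop could end up looking
like the sketch below. Note I am reusing the existing flow director mbuf
bits (PKT_RX_FDIR, PKT_RX_FDIR_ID and hash.fdir.hi) purely for
illustration since the actual reporting mechanism remains to be defined,
and the dispatch_*() / software_classify() helpers are
application-defined:

 #include <rte_ethdev.h>
 #include <rte_mbuf.h>

 /* Application-defined packet handlers (hypothetical). */
 extern void dispatch_by_mark(struct rte_mbuf *m, uint32_t mark);
 extern void dispatch_flagged(struct rte_mbuf *m);
 extern void software_classify(struct rte_mbuf *m);

 static void
 handle_rx(uint8_t port_id, uint16_t queue_id)
 {
         struct rte_mbuf *burst[32];
         uint16_t n = rte_eth_rx_burst(port_id, queue_id, burst, 32);
         uint16_t i;

         for (i = 0; i != n; ++i) {
                 struct rte_mbuf *m = burst[i];

                 if (m->ol_flags & PKT_RX_FDIR_ID) {
                         /* MARK: 32 bit value identifies the rule. */
                         dispatch_by_mark(m, m->hash.fdir.hi);
                 } else if (m->ol_flags & PKT_RX_FDIR) {
                         /* FLAG: a HW rule matched, identity unknown. */
                         dispatch_flagged(m);
                 } else {
                         /* No HW match yet (e.g. rule still being
                          * committed): software fallback path. */
                         software_classify(m);
                 }
         }
 }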
> > I think applications have to implement SW fallbacks all the time, as
> > even some sort of guarantee on the flow rule processing time may not
> > be enough to avoid misdirected packets and the related security
> > issues.
> [Sugesh] A software fallback will always be there. However I am a bit
> confused about how software is going to identify packets that have
> already been processed in hardware. I feel we need some notification
> in the packet itself when a hardware rule hits. ID/flag/any other
> options?

Yeah I think so too, as long as it is optional because we cannot assume
all PMDs will support it.

> > Let's wait for applications to start using this API and then
> > consider an extra set of asynchronous / real-time functions when the
> > need arises. It should not impact the way rules are specified.
> [Sugesh] Sure. I think the rule definition may not be impacted by
> this.

Thanks for your comments.

-- 
Adrien Mazarguil
6WIND