From: Andrew Rybchenko <arybchenko@solarflare.com>
To: Ori Kam, Thomas Monjalon
Cc: dev@dpdk.org, pbhagavatula@marvell.com, ferruh.yigit@intel.com,
 jerinj@marvell.com, John McNamara, Marko Kovacevic, Adrien Mazarguil,
 david.marchand@redhat.com, ktraynor@redhat.com, Olivier Matz
Date: Fri, 8 Nov 2019 11:35:10 +0300
Subject: Re: [dpdk-dev] [PATCH 1/2] ethdev: add flow action type update as an offload
List-Id: DPDK patches and discussions <dev@dpdk.org>

On 11/6/19 10:42 AM, Ori Kam wrote:
>
>> From: Andrew Rybchenko
>> Sent: Wednesday, November 6, 2019 8:41 AM
>> Subject: Re: [dpdk-dev] [PATCH 1/2] ethdev: add flow action type update
>> as an offload
>>
>> On 11/5/19 7:37 PM, Ori Kam wrote:
>>>
>>>> From: Andrew Rybchenko
>>>> Sent: Tuesday, November 5, 2019 1:31 PM
>>>> Subject: Re: [dpdk-dev] [PATCH 1/2] ethdev: add flow action type update
>>>> as an offload
>>>>
>>>
>>> [Snip]
>>>
>>>>> Yes, but like I said, in the Mellanox PMD for example we supported the
>>>>> mark only on non-transfer flows until this release, so when the user set
>>>>> mark on a transfer flow it was invalid. (In a transfer flow, if we have a
>>>>> miss we send the packet back to the Rx port so the application can
>>>>> understand on which table the miss happened.)
>>>>> In this version we added support for mark also in transfer (E-Switch)
>>>>> flows. So my question: what should the PMD report before this release,
>>>>> and what should the PMD report after this release?
>>>>>
>>>>> Your idea was our first thought when adding the Tx meta; in that case the
>>>>> meta was always set by the application, so we thought that this offload
>>>>> would enable better function selection, but as you know we removed this
>>>>> capability since it is not correct any more.
>>>>>
>>>>>> The above also highlights problems of the meta vs mark design. They are
>>>>>> very similar and there is no good definition of the difference and of
>>>>>> the rules which one should be used/supported in which conditions.
>>>>>>
>>>>>
>>>>> Mark and meta are exactly the same; the meta is just another value that
>>>>> the application can use. This is why both should act the same.
>>>>>
>>>>> And maybe this is the winning argument: the rte_flow validation approach
>>>>> was used and accepted for the meta, so let's try it also with the mark.
>>>>> (Please also remember that we didn't have this mark until now, so somehow
>>>>> the PMD worked 😊)
>>>>>
>>>>> Like I said before, I understand your approach, and each one has its own
>>>>> advantages and drawbacks. Let's start using the rte_flow approach and see
>>>>> how it goes; I promise you that if I see that it doesn't scale or causes
>>>>> more issues I will be the first one to submit changes.
>>>>
>>>> I tend to say OK, let's try. However, it must be documented
>>>> in the MARK action that if an application wants to use it, a rule
>>>> with the action must be validated before device start.
>>>
>>> I agree to add this to the rte_flow mark action documentation.
>>>
>>>> The PMD may use the attempt as an indication from the application
>>>> that it would like to use flow mark even if the validation
>>>> fails.
>>>
>>> No, if the PMD uses this validation as a hint it should return success and
>>> use the correct PMD.
>>
>> It would make it too strictly dependent on pattern/actions/state.
>
> I'm not sure I understand your comment.
> Why would it make it too strict? I guess that the mark action doesn't care
> which items are in the flow, so just setting an eth item sounds good enough.

The problem is if a single pattern+actions combination is used both to check
MARK support and to let the PMD know that the application is going to use it,
and that flow rule is not supported while another combination is supported,
that's it: the feature is falsely classified as not supported.

> Also I guess the mark doesn't conflict with many actions, maybe with the
> meta, so the application doesn't have a lot of choices.
> In any case, and this is the most important point: the application must
> validate that the NIC supports the mark, according to the RTE_FLOW
> definition. What is the meaning of saying we support the offload (using your
> suggestion) and then each flow you try to insert fails? So in any case the
> application must validate the flows it is going to use.

Yes, but the decision here is per rule, not global.
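
To make the per-rule check concrete, a minimal sketch of the kind of probe rule
being discussed could look as follows. This is only an illustration: the
eth-only pattern, the mark id and the queue index are arbitrary placeholders,
not a recommended or agreed probe rule.

#include <stdbool.h>
#include <rte_flow.h>

/* Validate "pattern eth / end actions mark / queue / end" before device
 * start; a zero return from rte_flow_validate() means this particular rule
 * (and therefore MARK delivery for it) is supported.
 */
static bool
flow_mark_rule_supported(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_mark mark = { .id = 1 };      /* arbitrary id */
	struct rte_flow_action_queue queue = { .index = 0 }; /* arbitrary queue */
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0;
}

Whether a successful validation of such a probe rule should also be taken as a
hint that the application will use MARK is exactly the open question here.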
>>>> Ori, please, suggest a formalized pattern and actions
>>>> specification to use if an application wants to utilize the
>>>> validation result as a criterion to enable/disable flow
>>>> marks usage.
>>>
>>> I can't do that, it depends on the application. The most basic is just
>>> "pattern eth actions mark / queue". In some cases where you need it for
>>> E-Switch it should be something like "transfer items port / eth / actions
>>> mark".
>>
>> If so, what should an application author do if even maintainers cannot
>> formalize it? It sounds like the approach does not work.
>
> This should be very easy for the application to know. How can I, a
> maintainer, know which flows are used and what is important to the
> application? I need to make sure that the application can understand whether
> what it wants is supported.

Yes, and that is the problem of complex logic making a global decision at the
start of day. The flow API allows to avoid global decisions and a verdict is
returned for each specific rule, but here we need a global decision which will
affect further logic.

>>>> What should be done if the patterns to use and the set
>>>> of actions together with MARK are not known in advance?
>>>
>>> I think that the application knows which kind of traffic it expects and
>>> which actions it needs.
>>
>> I'm afraid it is not always true.
>
> Can you please give me such an example? How can you develop an application
> without knowing what it should do? This means the application is not
> optimized and impossible to test.

First of all testpmd. I know that it is a test application etc., but I think
it is still a valid example. Second OVS, if I understand it correctly. As far
as I know it allows to add rules with various patterns and, in theory, any of
those patterns could end up being used with the MARK action. Basically any
application which allows to add custom flow rules.

I think we are running in circles here. I'll try to summarize the result of
the discussion to make it easier for other developers to follow. Please,
correct me if I'm wrong below. Sorry for a very long mail, but I'd really like
to finalize the discussion.

The problem:
~~~~~~~~~~~~

The PMD wants to know before port start whether the application wants to use
flow MARK/FLAG in the future. It is required because:
 1. HW may be configured in a different way to reserve resources for
    MARK/FLAG delivery.
 2. The datapath implementation choice may depend on it (e.g. vPMD is faster,
    but does not support MARK).

Discussed solutions:
~~~~~~~~~~~~~~~~~~~~

 A. Explicit Rx offload suggested by the patch.
 B. Implicit, by validation of a flow rule with MARK/FLAG actions used.
 C. Use a dynamic field/flag (i.e. the application registers a dynamic field
    and/or flag and the PMD uses a lookup to solve the problem), plus part of
    (B) to discover if the feature is supported.

All solutions require changes in applications which use these features. There
is a deprecation notice in place which advertises the DEV_RX_OFFLOAD_FLOW_MARK
addition, but maybe it is OK to substitute it with solution (B) or (C).
Solution (C) requires changes since it should be combined with (B) in order to
understand if the feature is supported.
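
To illustrate what solution (C) would mean in code, here is a rough sketch
using the dynamic mbuf flag API. Nothing here is an agreed interface: the flag
name is purely illustrative.

#include <stdbool.h>
#include <rte_mbuf_dyn.h>

/* Application side, before rte_eth_dev_start(): register a dynamic flag to
 * announce that flow MARK/FLAG values are wanted in received mbufs.
 * Returns the flag bit number, or a negative value on failure.
 */
static int
app_request_flow_mark(void)
{
	static const struct rte_mbuf_dynflag desc = {
		.name = "example_flow_mark", /* illustrative name only */
	};

	return rte_mbuf_dynflag_register(&desc);
}

/* PMD side, at start time: a successful lookup means the application has
 * registered the flag, so a MARK-capable datapath should be selected.
 */
static bool
pmd_flow_mark_requested(void)
{
	return rte_mbuf_dynflag_lookup("example_flow_mark", NULL) >= 0;
}

Note that such a registration is process-global, which is part of drawback 10
below.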
Advantages and drawbacks of solutions:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 1. The main drawback of (A) is "duplication", since we already have a way to
    request flow MARK using the rte_flow API. I don't fully agree that it is a
    duplication, but I agree that it sounds like duplication and complicates
    flow MARK usage by applications a bit. (B) complicates it as well.

 2. One more drawback of solution (A) is the necessity of a similar solution
    for META, and it eats one more offload bit. Yes, that's true, and I think
    it is not a problem. It would make it easier for applications to find out
    whether MARK or META is supported.

 3. The main advantage of solution (A) is simplicity. It is simple for the
    application to understand whether it is supported. It is simple in the PMD
    to understand that it is required. It is simple to disable it - just
    reconfigure. Also it is easier to document - just mention that the offload
    should be supported and enabled.

 4. The main advantage of solution (B) is no "duplication". I agree that it is
    a valid argument. Solving the problem without extra entities is always
    nice, but unfortunately it is too complex in this case.

 5. The main drawback of solution (B) is the complexity. It is necessary to
    choose a flow rule which should be used as the criterion, and it could be
    hardware dependent. Complex logic is required in the PMD if it wants to
    address the problem and control MARK delivery based on validated flow
    rules. It adds a dependency between start/stop processing and flow rule
    validation code. It is pretty complicated to document.

 6. Useless enabling of the offload in the case of solution (A), if the flow
    rules really used do not support MARK, looks like a drawback as well, but
    it is easily mitigated by a combination with solution (B), is only
    required if the application wants to dive into that level of optimization
    and complexity, and makes sense if the application knows the required flow
    rules in advance. So, it is not a problem in this case.

 7. Solution (C) has the drawbacks of solution (B) for applications to
    understand if these features are supported, but no drawbacks in the PMD,
    since an explicit criterion is used to enable/disable (dynamic field/flag
    lookup).

 8. Solution (C) is nice since it avoids "duplication".

 9. The main drawback of solution (C) is asymmetry. As it was discussed in the
    case of RX_TIMESTAMP (if I remember it correctly):
     - the PMD advertises the RX_TIMESTAMP offload capability
     - the application enables the offload
     - the PMD registers the dynamic field for the timestamp
    Solution (C):
     - the PMD advertises nothing
     - the application uses solution (B) to understand if these features are
       supported
     - the application registers the dynamic field/flag
     - the PMD does the lookup and solves the problem
    The asymmetry could be partially mitigated if the RX_TIMESTAMP solution is
    changed to require the application to register dynamic fields and the PMD
    to do the lookup if the offload is enabled. So, the only difference would
    be no offload in the case of flow MARK/FLAG and the usage of complex logic
    to understand whether it is supported or not. Maybe it would be really
    good, since it would allow dynamic fields to be registered before mempool
    population.

 10. A common drawback of solutions (B) and (C) is no granularity. Solution
     (A) may be per queue, while (B) and (C) cannot be per queue. Moreover (C)
     looks global - for all devices. It could be really painful.

(C) is nice, but I still vote for the simplicity and granularity of (A).

Andrew.