Date: Mon, 1 Jul 2019 14:20:02 -0700
From: Jakub Kicinski
To: "Laatz, Kevin"
Cc: Jonathan Lemon, netdev@vger.kernel.org, ast@kernel.org,
 daniel@iogearbox.net, bjorn.topel@intel.com, magnus.karlsson@intel.com,
 bpf@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
 bruce.richardson@intel.com, ciara.loftus@intel.com
Subject: Re: [PATCH 00/11] XDP unaligned chunk placement support
Message-ID: <20190701142002.1b17cc0b@cakuba.netronome.com>
In-Reply-To: <07e404eb-f712-b15a-4884-315aff3f7c7d@intel.com>
References: <20190620083924.1996-1-kevin.laatz@intel.com>
 <20190627142534.4f4b8995@cakuba.netronome.com>
 <07e404eb-f712-b15a-4884-315aff3f7c7d@intel.com>
Organization: Netronome Systems, Ltd.

On Mon, 1 Jul 2019 15:44:29 +0100, Laatz, Kevin wrote:
> On 28/06/2019 21:29, Jonathan Lemon wrote:
> > On 28 Jun 2019, at 9:19, Laatz, Kevin wrote:
> >> On 27/06/2019 22:25, Jakub Kicinski wrote:
> >>> I think that's very limiting.  What is the challenge in providing
> >>> aligned addresses, exactly?
> >> The challenges are two-fold:
> >> 1) it prevents using arbitrary buffer sizes, which will be an issue
> >> for supporting e.g. jumbo frames in the future.
> >> 2) higher-level user-space frameworks which may want to use AF_XDP,
> >> such as DPDK, do not currently support buffers with 'fixed'
> >> alignment.
> >>     The reasons DPDK uses arbitrary placement are:
> >>     - it would stop things working on certain NICs which need the
> >>       actual writable space specified in units of 1k - therefore we
> >>       need 2k + metadata space.
> >>     - we place padding between buffers to avoid constantly hitting
> >>       the same memory channels when accessing memory.
> >>     - it allows the application to choose the actual buffer size it
> >>       wants to use.
> >>     We make use of the above to speed up processing significantly
> >> and also to reduce the packet buffer memory size.
> >>
> >>     Not having arbitrary buffer alignment also means an AF_XDP
> >> driver for DPDK cannot be a drop-in replacement for existing drivers
> >> in those frameworks. Even with a new capability to allow arbitrary
> >> buffer alignment, existing apps will need to be modified to use that
> >> new capability.
> >
> > Since all buffers in the umem are the same chunk size, the original
> > buffer address can be recalculated with some multiply/shift math.
> > However, this is more expensive than just a mask operation.
>
> Yes, we can do this.

That'd be best. Can DPDK reasonably guarantee the slicing is uniform,
i.e. that it's not disparate buffer pools with different bases?
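FWIW the cost difference Jonathan is pointing at looks roughly like
this; a minimal sketch, untested, helper names made up, and it assumes
addresses are offsets into a single umem region sliced uniformly from
offset zero:

#include <stdint.h>

/* Aligned mode: chunk size is a power of two, so the chunk base of
 * any address within the chunk is recovered with a single mask. */
static inline uint64_t chunk_base_aligned(uint64_t addr,
                                          uint64_t chunk_size)
{
        return addr & ~(chunk_size - 1);
}

/* Arbitrary chunk size: chunks are still uniformly sized, so the
 * base is recoverable with a divide and a multiply - correct, but
 * slower than the single mask above.  With disparate pools at
 * different bases you'd have to subtract the right base first,
 * hence the uniformity question. */
static inline uint64_t chunk_base_unaligned(uint64_t addr,
                                            uint64_t chunk_size)
{
        return addr / chunk_size * chunk_size;
}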
> Another option we have is to add a socket option for querying the
> metadata length from the driver (assuming it doesn't vary per packet).
> We can use that information to get back to the original address using
> subtraction.

Unfortunately the metadata depends on the packet and on how much info
the device was able to extract, so it's variable length.

> Alternatively, we can change the Rx descriptor format to include the
> metadata length. We could do this in a couple of ways. For example,
> rather than returning the address of the start of the packet, we could
> return the buffer address that was passed in and add another 16-bit
> field to specify the start-of-packet offset within that buffer. If
> using another 16 bits of the descriptor space is not desirable, an
> alternative could be to limit umem sizes to e.g. 2^48 bytes (256
> terabytes should be enough, right :-) ) and use the remaining 16 bits
> of the address as a packet offset. Other variations on these
> approaches are obviously possible too.
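The 48/16 split could look something like this on the descriptor side;
again just a sketch, with illustrative constants and helper names
rather than anything from an existing header:

#include <stdint.h>

/* Low 48 bits carry the original buffer address (capping the umem
 * at 2^48 bytes); the high 16 bits carry the start-of-packet
 * offset within that buffer. */
#define UMEM_OFFSET_SHIFT 48
#define UMEM_ADDR_MASK    ((1ULL << UMEM_OFFSET_SHIFT) - 1)

static inline uint64_t desc_addr_pack(uint64_t buf_addr,
                                      uint16_t pkt_offset)
{
        return (buf_addr & UMEM_ADDR_MASK) |
               ((uint64_t)pkt_offset << UMEM_OFFSET_SHIFT);
}

static inline uint64_t desc_buf_addr(uint64_t addr)
{
        return addr & UMEM_ADDR_MASK;
}

static inline uint16_t desc_pkt_offset(uint64_t addr)
{
        return addr >> UMEM_OFFSET_SHIFT;
}

The app then gets back to the address it put on the fill ring with a
single mask, at the cost of the umem size cap.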
Seems reasonable to me..