From: Kal Cutter Conley
Date: Tue, 4 Apr 2023 12:33:58 +0200
Subject: Re: [PATCH bpf-next v2 08/10] xsk: Support UMEM chunk_size > PAGE_SIZE
To: Magnus Karlsson
Cc: Björn Töpel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Jonathan Corbet, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, netdev@vger.kernel.org,
 bpf@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20230329180502.1884307-1-kal.conley@dectris.com>
 <20230329180502.1884307-9-kal.conley@dectris.com>
List-ID: bpf@vger.kernel.org

> > > Is not the max 64K, as you test against XDP_UMEM_MAX_CHUNK_SIZE in
> > > xdp_umem_reg()?
> >
> > The absolute max is 64K. If HPAGE_SIZE < 64K, then the limit
> > would be HPAGE_SIZE.
>
> Is there such a case where HPAGE_SIZE would be less than 64K? If not,
> then just write 64K.

Yes. While most platforms have HPAGE_SIZE defined to a compile-time
constant >= 64K (very often 2M), there are platforms (at least ia64 and
powerpc) where the hugepage size is configured at boot. Specifically, on
Itanium (ia64), the hugepage size may be configured at boot to any valid
page size > PAGE_SIZE (e.g. 8K).
See: https://elixir.bootlin.com/linux/latest/source/arch/ia64/mm/hugetlbpage.c#L159

> > > >  static int xdp_umem_pin_pages(struct xdp_umem *umem, unsigned long address)
> > > >  {
> > > > +#ifdef CONFIG_HUGETLB_PAGE
> > >
> > > Let us try to get rid of most of these #ifdefs sprinkled around the
> > > code. How about hiding this inside xdp_umem_is_hugetlb() and getting
> > > rid of these #ifdefs below? Since I believe it is quite uncommon not
> > > to have this config enabled, we could simplify things by always using
> > > the page_size in the pool, for example. And ditto for the one in
> > > struct xdp_umem. What do you think?
> >
> > I used #ifdef for `page_size` in the pool for maximum performance when
> > huge pages are disabled. We could also not worry about optimizing this
> > uncommon case, though, since the performance impact is very small.
> > However, I don't find the #ifdefs excessive either.
>
> Keep them to a minimum please, since there are few of them in the
> current code outside of some header files. And let us assume that
> CONFIG_HUGETLB_PAGE is the common case.

Would you be OK if I just removed the ones from xsk_buff_pool? I think
the code in xdp_umem.c is quite readable, and the #ifdefs are really
only used in xdp_umem_pin_pages().