From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: David Laight <David.Laight@ACULAB.COM>
Cc: "David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
	Magnus Karlsson <magnus.karlsson@intel.com>,
	Michal Kubiak <michal.kubiak@intel.com>,
	Larysa Zaremba <larysa.zaremba@intel.com>,
	"Jesper Dangaard Brouer" <hawk@kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	Christoph Hellwig <hch@lst.de>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"intel-wired-lan@lists.osuosl.org"
	<intel-wired-lan@lists.osuosl.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next v2 05/12] iavf: always use a full order-0 page
Date: Fri, 26 May 2023 14:52:57 +0200	[thread overview]
Message-ID: <88c4f8c4-085c-1934-536b-0fc8a38205c7@intel.com> (raw)
In-Reply-To: <9acb1863f53542b6bd247ad641b8c0fa@AcuMS.aculab.com>

From: David Laight <David.Laight@ACULAB.COM>
Date: Fri, 26 May 2023 08:57:31 +0000

> From: Alexander Lobakin
>> Sent: 25 May 2023 13:58

[...]

>> 4096 page
>> 64 head, 320 tail
>> 3712 HW buffer size
>> 3686 max MTU w/o frags
> 
> I'd have thought it was important to pack multiple buffers for
> MTU 1500 into a single page.
> 512 bytes split between head and tail room really ought to
> be enough for most cases.
> 
> Is much tailroom ever used for received packets?

You don't have any tailroom at all when you split a 4k page into two
halves on x86_64. Of those 512 bytes, 320+ go to skb_shared_info, and
then you're left with 192 bytes or even less (with an increased
MAX_SKB_FRAGS, which is becoming a trend thanks to Eric and Big TCP).
XDP requires 256 bytes of headroom -- and you already don't have that
even with the default number of frags.
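To make the math explicit, here's a rough back-of-the-envelope sketch
(userspace C; the 1536-byte HW buffer, ~320-byte skb_shared_info and
256-byte XDP headroom are illustrative values, not pulled from any
particular kernel tree):

	#include <stdio.h>

	int main(void)
	{
		const int half_page = 2048;	/* 4k page split in two */
		const int hw_buf    = 1536;	/* assumed HW Rx buffer size */
		const int shinfo    = 320;	/* ~SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) */
		const int xdp_room  = 256;	/* XDP_PACKET_HEADROOM */

		int head_tail = half_page - hw_buf;	/* 512 bytes of head + tail */
		int headroom  = head_tail - shinfo;	/* 192 bytes left for headroom */

		printf("headroom left: %d bytes\n", headroom);
		printf("short of XDP's requirement by %d bytes\n",
		       xdp_room - headroom);
		return 0;
	}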

> It is used to append data to packets being sent - but that isn't
> really relevant here.
> 
> While the unused memory is moderate for 4k pages, it is horrid
> for anything with large pages - think 64k and above.

But hey, there's always unused space, and whether to call it "horrid"
is arbitrary. For example, imagine a machine mostly handling 64-byte
traffic. From that point of view, even splitting pages into 2 halves
is still "horrid" -- 2048 bytes of truesize only to receive 64 bytes :>
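Just to put a number on that (pure arithmetic, nothing measured; the
64-byte frame is the hypothetical workload above):

	#include <stdio.h>

	int main(void)
	{
		const int frame = 64;			/* small-packet workload */
		const int half_page = 2048, full_page = 4096;

		printf("half-page split: %.1f%% of truesize is payload\n",
		       100.0 * frame / half_page);	/* ~3.1% */
		printf("full order-0 page: %.1f%% of truesize is payload\n",
		       100.0 * frame / full_page);	/* ~1.6% */
		return 0;
	}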

> IIRC large pages are common on big PPC and maybe some arm cpus.

Even in that case, the percentage of machines with 4k pages running
those NICs is much higher than that of machines with larger pages.
I thought we usually optimize things for the most common use cases.
Adding a couple hundred lines, branches on hotpaths, limitations etc.
only to serve a particular architecture... Dunno. I have 16k pages on
my private development machines and I have no issues with using 1 page
per frame in their NIC drivers :s

> 
> 	David
> 
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)
> 

Thanks,
Olek
