* [RFC Patch v1 0/3] Introduce ENA local page cache
@ 2021-03-09 17:10 Shay Agroskin
From: Shay Agroskin @ 2021-03-09 17:10 UTC
  To: David Miller, Jakub Kicinski, netdev
  Cc: Shay Agroskin, Woodhouse, David, Machulsky, Zorik, Matushevsky,
	Alexander, Saeed Bshara, Wilson, Matt, Liguori, Anthony, Bshara,
	Nafea, Tzalik, Guy, Belgazal, Netanel, Saidi, Ali, Herrenschmidt,
	Benjamin, Kiyanovski, Arthur, Jubran, Samih, Dagan, Noam

A high incoming packet rate (pps) leads to frequent page allocations
by the napi routine when it refills the RX buffers for incoming
packets.

On several new instance types in the AWS fleet, with high pps rates,
these frequent allocations create contention between the different
napi routines.
The contention arises because the routines end up accessing the
buddy allocator, which is a shared resource and requires lock-based
synchronization (the same lock is also held when freeing a page). In
our tests we observed that this contention causes the CPUs that
serve the RX queues to reach 100% utilization and degrades network
performance.
While this contention can be relieved by making sure that pages are
allocated and freed on the same core, which would allow the driver to
take advantage of the per-CPU page lists (PCP), this solution is not
always available or easy to maintain.

This patchset implements a page cache local to each RX queue. When the
napi routine needs a page, it first checks whether the cache holds a
previously allocated page that is no longer in use. If so, this page
is fetched instead of allocating a new one. Otherwise, if the cache
has no free pages, a page is allocated through the normal allocation
path (PCP or buddy allocator) and returned to the caller.
A page that is allocated outside the cache is afterwards added to the
cache, up to the cache's maximum size (set to 2048 pages in this
patchset).
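
Conceptually, each RX queue carries a fixed-capacity array of page
pointers. A minimal sketch in C of what such a structure could look
like (the type and field names here are illustrative, not the ones
used in the patch):

struct ena_page_cache {
	u32 head;          /* oldest outstanding page, checked first */
	u32 current_size;  /* number of pages cached so far */
	u32 max_size;      /* capacity, 2048 pages in this patchset */
	struct page *pages[];  /* the cached pages themselves */
};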

Page availability is tracked through the page refcount. A cached page
has a refcount of 2 when it is passed to the napi routine as an RX
buffer: one reference held by the cache and one by the buffer. When a
page's refcount drops back to 1, the cache assumes it is free to be
reused.
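
In kernel terms the availability test is a plain refcount check. A
sketch, assuming a hypothetical helper name (page_ref_count() comes
from linux/page_ref.h):

static bool ena_lpc_page_available(struct page *page)
{
	/* The cache holds one reference of its own; a refcount of 1
	 * means the stack has released the RX buffer and the page
	 * can be handed out again.
	 */
	return page_ref_count(page) == 1;
}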

To avoid traversing all pages in the cache when looking for an
available page, we check the availability of only the oldest page
fetched for the RX queue that hasn't been returned to the cache yet
(i.e. still has a refcount greater than 1).

For example, for a cache of size 8 from which the pages at indices 0-7
were fetched (and placed in the RX SQ), the next time napi tries to
fetch a page from the cache, the cache checks the availability of the
page at index 0 and, if it is available, hands it to napi. The next
time napi tries to fetch a page, the cache entry at index 1 is
checked, and so on.
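
Putting the pieces together, the fetch path could look like the sketch
below. This is illustrative only, reusing the hypothetical names from
the sketches above (the actual implementation lives in ena_netdev.c);
alloc_page() and page_ref_inc() are the stock kernel helpers:

static struct page *ena_lpc_get_page(struct ena_page_cache *cache)
{
	struct page *page;

	if (cache->current_size) {
		page = cache->pages[cache->head];
		/* Check only the oldest outstanding page; if it is
		 * still in use, skip the cache this round.
		 */
		if (ena_lpc_page_available(page)) {
			cache->head = (cache->head + 1) %
				      cache->current_size;
			/* Hand the page to napi with a refcount of 2. */
			page_ref_inc(page);
			return page;
		}
	}

	/* Fall back to the normal allocation path (PCP or buddy
	 * allocator) and cache the new page if there is room left.
	 */
	page = alloc_page(GFP_ATOMIC);
	if (page && cache->current_size < cache->max_size) {
		cache->pages[cache->current_size++] = page;
		page_ref_inc(page); /* extra reference held by the cache */
	}
	return page;
}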

Memory consumption:

At maximum occupancy the cache holds 2048 pages per queue. For an
interface with 32 queues, this amounts to 32 * 2048 * 4KB = 256MB
used by the driver for its RX queues.

To avoid choking the system, this feature is only enabled on
instances with more than 16 queues, which in AWS come with several
tens of gigabytes of RAM. Moreover, the feature can be turned off
completely using ethtool.
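
For example, something along the following lines would toggle it. The
priv-flag name here is a guess; ethtool --show-priv-flags <iface>
lists the actual flags the driver exposes:

  # list the driver's private flags and their current state
  ethtool --show-priv-flags eth0
  # turn the local page cache off
  ethtool --set-priv-flags eth0 local_page_cache off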

Having said that, the worst-case memory cost of RX queues with 2K
entries is the same as that of 1K-entry queues plus LPC, while the
latter allocates the memory only when the traffic rate is higher than
the rate at which pages are freed.

Performance results:

4 c5n.18xlarge instances sending iperf TCP traffic to a p4d.24xlarge instance.
Packet size: 1500 bytes

c5n.18xlarge specs:
Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz
with 72 cores. 32 queue pairs.

p4d.24xlarge specs:
Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
with 96 cores. 4 * 32 = 128 (4 interfaces) queue pairs.

|                     | before | after |
|---------------------+--------+-------|
| bandwidth (Gbps)    | 260    | 330   |
| CPU utilization (%) | 100    | 56    |

Shay Agroskin (3):
  net: ena: implement local page cache (LPC) system
  net: ena: update README file with a description about LPC
  net: ena: support ethtool priv-flags and LPC state change

 .../device_drivers/ethernet/amazon/ena.rst    |  28 ++
 drivers/net/ethernet/amazon/ena/ena_ethtool.c |  56 ++-
 drivers/net/ethernet/amazon/ena/ena_netdev.c  | 369 +++++++++++++++++-
 drivers/net/ethernet/amazon/ena/ena_netdev.h  |  32 ++
 4 files changed, 458 insertions(+), 27 deletions(-)

-- 
2.25.1



Thread overview: 11+ messages
2021-03-09 17:10 [RFC Patch v1 0/3] Introduce ENA local page cache Shay Agroskin
2021-03-09 17:10 ` [RFC Patch v1 1/3] net: ena: implement local page cache (LPC) system Shay Agroskin
2021-03-09 17:57   ` Eric Dumazet
2021-03-10  2:04     ` Andrew Lunn
2021-03-11 23:15       ` Saeed Mahameed
2021-03-16  8:23         ` Shay Agroskin
2021-03-16  8:26     ` Shay Agroskin
2021-03-09 19:22   ` kernel test robot
2021-03-09 17:10 ` [RFC Patch v1 2/3] net: ena: update README file with a description about LPC Shay Agroskin
2021-03-09 17:10 ` [RFC Patch v1 3/3] net: ena: support ethtool priv-flags and LPC state change Shay Agroskin
