From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Tariq Toukan <tariqt@mellanox.com>
Cc: David Miller <davem@davemloft.net>,
	Mel Gorman <mgorman@techsingularity.net>,
	Eric Dumazet <eric.dumazet@gmail.com>,
	Alexei Starovoitov <ast@fb.com>,
	Saeed Mahameed <saeedm@mellanox.com>,
	Eran Ben Elisha <eranbe@mellanox.com>,
	Linux Kernel Network Developers <netdev@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@suse.com>, linux-mm <linux-mm@kvack.org>,
	brouer@redhat.com
Subject: Re: Page allocator bottleneck
Date: Fri, 15 Sep 2017 09:28:39 +0200	[thread overview]
Message-ID: <20170915092839.690ea9e9@redhat.com> (raw)
In-Reply-To: <cef85936-10b2-5d76-9f97-cb03b418fd94@mellanox.com>

On Thu, 14 Sep 2017 19:49:31 +0300
Tariq Toukan <tariqt@mellanox.com> wrote:

> Hi all,
> 
> As part of the efforts to support increasing next-generation NIC speeds,
> I am investigating SW bottlenecks in network stack receive flow.
> 
> Here I share some numbers I got for a simple experiment, in which I 
> simulate the page allocation rate needed in 200Gbps NICs.

Thanks for bringing this up again. 

> I ran the test below over 3 different (modified) mlx5 driver versions,
> loaded on server side (RX):
> 1) RX page cache disabled, 2 packets per page.

Packing two packets per page basically halves the overhead you see
from the page allocator.

> 2) RX page cache disabled, one packet per page.

This should stress the page allocator.

> 3) Huge RX page cache, one packet per page.

A driver-level page-cache will look nice, as long as it "works".

Drivers usually have no option other than basing their recycle
facility on the page refcount (as there is no destructor callback),
which implies packets/pages need to be returned quickly enough for
recycling to work.
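
Roughly, the recycle test ends up looking like the sketch below (a
minimal sketch; the struct and function names are made up, not actual
mlx5 code, but page_ref_count() is the real helper):

#include <linux/mm.h>		/* page_ref_count(), struct page */

#define MY_CACHE_SIZE 256

struct my_rx_cache {
	unsigned int head;
	struct page *pages[MY_CACHE_SIZE];
};

/* Try to recycle an RX page back into the driver cache.  Only
 * possible when the refcnt shows the stack has already dropped its
 * reference, i.e. the page was returned "quickly enough".
 */
static bool my_rx_recycle(struct my_rx_cache *cache, struct page *page)
{
	if (cache->head == MY_CACHE_SIZE)
		return false;	/* cache full, let the page be freed */
	if (page_ref_count(page) != 1)
		return false;	/* page still in flight up the stack */

	cache->pages[cache->head++] = page;
	return true;
}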

> All page allocations are of order 0.
> 
> NIC: ConnectX-5 100 Gbps.
> CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
> 
> Test:
> 128 TCP streams (using super_netperf).
> Changing num of RX queues.
> HW LRO OFF, GRO ON, MTU 1500.

TCP streams with GRO are actually a good stress test for the page
allocator (or the driver's page-recycle cache), as Eric Dumazet has
made some nice optimizations that (in most situations) cause us to
quickly free/recycle the SKBs (coming from the driver) and store the
pages in a single SKB.  This causes us to hit the SLUB fastpath for
the SKBs, but once the pages need to be freed, this stresses the page
allocator more.
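
To illustrate what "store the pages in a single SKB" means: the driver
attaches each RX page as a paged frag, and GRO then merges the frags
of many such SKBs onto one head SKB.  A sketch (the wrapper name is
made up; skb_add_rx_frag() is the real helper):

#include <linux/skbuff.h>	/* skb_add_rx_frag(), skb_shinfo() */

/* Attach one RX page to an SKB as a paged fragment.  After GRO
 * coalescing, many pages hang off a single head SKB; the small SKB
 * structs are freed quickly (SLUB fastpath), while the pages are
 * only freed once the socket consumer has read the data.
 */
static void my_rx_add_page(struct sk_buff *skb, struct page *page,
			   unsigned int offset, unsigned int len)
{
	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
			offset, len, PAGE_SIZE /* truesize */);
}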

Also be aware that with TCP flows, the packets are likely delivered
into a socket that is consumed on another CPU.  Thus, the pages are
allocated on one CPU and freed on another.  AFAIK this stresses the
order-0 PCP (Per-Cpu-Pages) cache.


> Observe: BW as a function of num RX queues.
> 
> Results:
> 
> Driver #1:
> #rings	BW (Mbps)
> 1	23,813
> 2	44,086
> 3	62,128
> 4	78,058
> 6	94,210 (linerate)
> 8	94,205 (linerate)
> 12	94,202 (linerate)
> 16	94,191 (linerate)
> 
> Driver #2:
> #rings	BW (Mbps)
> 1	18,835
> 2	36,716
> 3	50,521
> 4	61,746
> 6	63,637
> 8	60,299
> 12	51,048
> 16	43,337
> 
> Driver #3:
> #rings	BW (Mbps)
> 1	19,316
> 2	44,850
> 3	69,549
> 4	87,434
> 6	94,342 (linerate)
> 8	94,350 (linerate)
> 12	94,327 (linerate)
> 16	94,327 (linerate)
> 
> 
> Insights:
> Major degradation between #1 and #2, not getting anywhere close to linerate!
> Degradation is fixed between #2 and #3.
> This is because the page allocator cannot stand the higher allocation rate.
> In #2, we also see that the addition of rings (cores) reduces BW (!!),
> as a result of increasing congestion over shared resources.
> 
> Congestion in this case is very clear.
> When monitored in perf top:
> 85.58% [kernel] [k] queued_spin_lock_slowpath

Well, we obviously need to know the caller of the spin_lock, e.g. by
recording call-graphs (perf record -g).  In this case it is likely the
page allocator's zone lock.  It could also be the TCP socket locks,
but given GRO is enabled, those should be hit much less.


> I think that page allocator issues should be discussed separately:
> 1) Rate: Increase the allocation rate on a single core.
> 2) Scalability: Reduce congestion and sync overhead between cores.

Yes, but this is no small task.  It is on my TODO-list (emacs
org-mode), but I have other tasks with higher priority atm.  I'll be
working on XDP_REDIRECT for the next several months.  Currently I am
trying to convince people that we should do an explicit packet-page
return/free callback (which would avoid many of these issues).


> This is clearly the current bottleneck in the network stack receive
> flow.
> 
> I know about some efforts that were made in the past two years.
> For example the ones from Jesper et al.:
>
> - Page-pool (not accepted AFAIK).

The page-pool has several purposes:
 1. generic page-cache for drivers,
 2. keep pages DMA-mapped
 3. facilitate drivers to change RX-ring memory model

From an MM point-of-view, the page-pool is just a destructor callback
that can "steal" the page.
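
To make the "destructor callback" idea concrete, a hypothetical
sketch (none of these names exist in the kernel; only put_page() is
real):

#include <linux/mm.h>		/* put_page() */

struct page_destructor {
	/* Return true to "steal" the page for recycling; false lets
	 * the normal page allocator free path run.
	 */
	bool (*release)(struct page_destructor *dtor, struct page *page);
};

static inline void my_put_page(struct page *page,
			       struct page_destructor *dtor)
{
	if (dtor && dtor->release(dtor, page))
		return;		/* pool stole the page for reuse */
	put_page(page);		/* normal free path */
}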

If I can convince XDP_REDIRECT to use an explicit destructor callback,
then I get almost what I need, except for the generic part: the normal
network path will not see the benefit.  Thus, it would not help your
use-case, I guess.


> - Page-allocation bulking.

Notice that page-allocator bulking would still be needed by the
page-pool and other page-cache facilities; we should implement it
regardless of the page_pool.

Even without a page-pool facility to hide the use of page bulking, you
could use page-bulk-alloc in the driver RX-ring refill, and find where
TCP frees the GRO packets and do page-bulk-free there, as sketched
below.
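
A sketch of how a driver refill could consume such an API
(alloc_pages_bulk() is hypothetical here, nothing like it is merged):

#include <linux/gfp.h>		/* gfp_t, GFP_ATOMIC */
#include <linux/mm.h>		/* struct page */
#include <linux/errno.h>	/* ENOMEM */

/* Hypothetical bulk API: fill @pages with up to @count order-0
 * pages, return how many were actually allocated.
 */
unsigned int alloc_pages_bulk(gfp_t gfp, unsigned int count,
			      struct page **pages);

/* RX-ring refill: one call into the allocator amortizes the
 * zone/PCP locking cost over the whole batch, instead of paying
 * it once per page.
 */
static int my_rx_ring_refill(struct page **ring, unsigned int needed)
{
	unsigned int got = alloc_pages_bulk(GFP_ATOMIC, needed, ring);

	return got == needed ? 0 : -ENOMEM;
}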


> - Optimize order-0 allocations in Per-Cpu-Pages.

There is a need to optimize PCP some more for the single-core XDP
performance target (~14Mpps).  I guess the easiest way around this is
to implement/integrate a page bulk API into PCP.

The TCP-GRO use-case you are hitting is a different bottleneck.
It is a multi-CPU parallel workload that exceeds the PCP cache size
and causes you to hit the page buddy allocator.

I wonder if you could "solve"/mitigate the issue by tuning the size
of the PCP cache?
AFAIK it only keeps 128 pages cached per CPU... I know you can see this
via a proc file, but I cannot remember which(?).  And I'm not sure how
you tune this(?)


> I am not an mm expert, but wanted to raise the issue again, to combine 
> the efforts and hear from you guys about status and possible directions.

Regarding recent changes... if you have your kernel compiled with
CONFIG_NUMA then the page allocator is slower (due to keeping
NUMA stats), except that this was recently optimized and merged(?)

What (exact) kernel git tree did you run these tests on?

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
