From: Tariq Toukan <tariqt@mellanox.com>
To: Mel Gorman <mgorman@techsingularity.net>,
	Tariq Toukan <tariqt@mellanox.com>
Cc: Linux Kernel Network Developers <netdev@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>, David Miller <davem@davemloft.net>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Eric Dumazet <eric.dumazet@gmail.com>,
	Alexei Starovoitov <ast@fb.com>,
	Saeed Mahameed <saeedm@mellanox.com>,
	Eran Ben Elisha <eranbe@mellanox.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@suse.com>
Subject: Re: Page allocator bottleneck
Date: Wed, 8 Nov 2017 14:42:04 +0900	[thread overview]
Message-ID: <b249f79a-a92e-f2ef-fdd5-3a9b8b6c3f48@mellanox.com> (raw)
In-Reply-To: <20171103134020.3hwquerifnc6k6qw@techsingularity.net>



On 03/11/2017 10:40 PM, Mel Gorman wrote:
> On Thu, Nov 02, 2017 at 07:21:09PM +0200, Tariq Toukan wrote:
>>
>>
>> On 18/09/2017 12:16 PM, Tariq Toukan wrote:
>>>
>>>
>>> On 15/09/2017 1:23 PM, Mel Gorman wrote:
>>>> On Thu, Sep 14, 2017 at 07:49:31PM +0300, Tariq Toukan wrote:
>>>>> Insights: Major degradation between #1 and #2, not getting anywhere
>>>>> close to line rate! The degradation is fixed between #2 and #3. This
>>>>> is because the page allocator cannot sustain the higher allocation
>>>>> rate. In #2, we also see that adding rings (cores) reduces BW (!!),
>>>>> as a result of increasing congestion over shared resources.
>>>>>
>>>>
>>>> Unfortunately, no surprises there.
>>>>
>>>>> Congestion in this case is very clear. When monitored in perf
>>>>> top: 85.58% [kernel] [k] queued_spin_lock_slowpath
>>>>>
>>>>
>>>> While it's not proven, the most likely candidate is the zone lock
>>>> and that should be confirmed using a call-graph profile. If so, then
>>>> the suggestion to tune to the size of the per-cpu allocator would
>>>> mitigate the problem.
>>>>
>>> Indeed, I tuned the per-cpu allocator and the bottleneck is relieved.
>>>
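[Editor's note: the thread does not name the exact knob that was tuned; on
2017-era kernels the likely candidate is vm.percpu_pagelist_fraction, which
sizes the per-cpu page lists (it was replaced by
percpu_pagelist_high_fraction in kernel 5.14). A minimal sketch, assuming
that knob, to inspect the current value:]

```python
# Sketch: read whichever per-cpu pagelist knob the running kernel exposes.
# Assumption: this is the knob referred to above; raising it grows the
# per-cpu page lists, which relieves pressure on the zone lock.
from pathlib import Path

CANDIDATES = [
    Path("/proc/sys/vm/percpu_pagelist_fraction"),       # kernels <= 5.13
    Path("/proc/sys/vm/percpu_pagelist_high_fraction"),  # kernels >= 5.14
]

def read_pcp_knob():
    """Return (name, value) for the first knob that exists, else None."""
    for path in CANDIDATES:
        if path.exists():
            return path.name, int(path.read_text())
    return None

print(read_pcp_knob())
```

[Writing a new value requires root, e.g.
`sysctl -w vm.percpu_pagelist_fraction=8`.]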
>>
>> Hi all,
>>
>> After setting this task aside for a while to work on other things, I got
>> back to it now and see that the good behavior I observed earlier was not
>> stable.
>>
>> Recall: I work with a modified driver that allocates a page (4K) per packet
>> (MTU=1500), in order to simulate the stress on page-allocator in 200Gbps
>> NICs.
>>
> 
> There is almost nothing new in the data that hasn't been discussed
> before. The suggestion to free on a remote per-cpu list would be
> expensive, as it would require per-cpu lists to have a lock for safe
> remote access.
That's right, but each such lock would be significantly less congested
than the buddy allocator lock. In the flow in question, only two cores
need to synchronize (one allocates, one frees).
We also need to evaluate the cost of acquiring and releasing the lock
in the completely uncontended case.
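[A rough userspace analogy of that last point (Python purely for
illustration; kernel spinlock costs differ, and nothing here is from the
thread itself) is to measure the fixed acquire/release overhead of a lock
that is never contended:]

```python
# Sketch: compare a bare increment loop against the same loop taking and
# releasing an uncontended lock each iteration, exposing the fixed
# acquire/release cost that exists even with zero congestion.
import threading
import time

def bench(n=100_000):
    lock = threading.Lock()
    count = 0

    start = time.perf_counter()
    for _ in range(n):
        count += 1
    plain = time.perf_counter() - start

    count = 0
    start = time.perf_counter()
    for _ in range(n):
        with lock:  # uncontended: no other thread ever holds it
            count += 1
    locked = time.perf_counter() - start
    return plain, locked

plain, locked = bench()
print(f"plain loop: {plain:.4f}s, uncontended lock: {locked:.4f}s")
```

[The gap between the two timings is the per-iteration lock overhead one
would be trading for reduced buddy-lock contention.]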

>  However,
> I'd be curious if you could test the mm-pagealloc-irqpvec-v1r4 branch of
> https://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git . It's an
> unfinished prototype I worked on a few weeks ago. I was going to revisit
> it in about a month's time, when 4.15-rc1 was out. I'd be interested in
> seeing if it has a positive gain in normal page allocations without
> destroying the performance of interrupt and softirq allocation contexts.
> The interrupt/softirq context testing is crucial, as that is something
> that hurt us before when trying to improve page allocator performance.
> 
Yes, I will test that once I am back in the office (after the netdev
conference and vacation).
Can you please elaborate in a few words on the idea behind the prototype?
Does it address page-allocator scalability issues, or only the rate of
single-core page allocations?


Thread overview: 31+ messages
2017-09-14 16:49 Page allocator bottleneck Tariq Toukan
2017-09-14 20:19 ` Andi Kleen
2017-09-17 15:43   ` Tariq Toukan
2017-09-15  7:28 ` Jesper Dangaard Brouer
2017-09-17 16:16   ` Tariq Toukan
2017-09-18  7:34     ` Aaron Lu
2017-09-18  7:44       ` Aaron Lu
2017-09-18 15:33         ` Tariq Toukan
2017-09-19  7:23           ` Aaron Lu
2017-09-15 10:23 ` Mel Gorman
2017-09-18  9:16   ` Tariq Toukan
2017-11-02 17:21     ` Tariq Toukan
2017-11-03 13:40       ` Mel Gorman
2017-11-08  5:42         ` Tariq Toukan [this message]
2017-11-08  9:35           ` Mel Gorman
2017-11-09  3:51             ` Figo.zhang
2017-11-09  5:06             ` Tariq Toukan
2017-11-09  5:21             ` Jesper Dangaard Brouer
2018-04-21  8:15       ` Aaron Lu
2018-04-22 16:43         ` Tariq Toukan
2018-04-23  8:54           ` Tariq Toukan
2018-04-23 13:10             ` Aaron Lu
2018-04-27  8:45               ` Aaron Lu
2018-05-02 13:38                 ` Tariq Toukan
