From: Muchun Song <muchun.song@linux.dev>
To: Gang Li <gang.li@linux.dev>
Cc: David Hildenbrand <david@redhat.com>,
	David Rientjes <rientjes@google.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Gang Li <ligang.bdlg@bytedance.com>
Subject: Re: [PATCH v4 6/7] hugetlb: parallelize 2M hugetlb allocation and initialization
Date: Tue, 23 Jan 2024 11:32:28 +0800	[thread overview]
Message-ID: <C79B8BB3-C1F8-4DFA-A084-C4B47486681F@linux.dev> (raw)
In-Reply-To: <829fb129-f643-4960-a2da-cd38e5ee8f39@linux.dev>



> On Jan 23, 2024, at 10:12, Gang Li <gang.li@linux.dev> wrote:
> 
> On 2024/1/22 19:30, Muchun Song wrote:
>>> On Jan 22, 2024, at 18:12, Gang Li <gang.li@linux.dev> wrote:
>>> 
>>> On 2024/1/22 15:10, Muchun Song wrote:
>>>> On 2024/1/18 20:39, Gang Li wrote:
>>>>> +static void __init hugetlb_alloc_node(unsigned long start, unsigned long end, void *arg)
>>>>>   {
>>>>> -    unsigned long i;
>>>>> +    struct hstate *h = (struct hstate *)arg;
>>>>> +    int i, num = end - start;
>>>>> +    nodemask_t node_alloc_noretry;
>>>>> +    unsigned long flags;
>>>>> +    int next_node = 0;
>>>> This should be first_online_node, which may not be zero.
>>> 
>>> That's right. Thanks!
>>> 
>>>>> -    for (i = 0; i < h->max_huge_pages; ++i) {
>>>>> -        if (!alloc_bootmem_huge_page(h, NUMA_NO_NODE))
>>>>> +    /* Bit mask controlling how hard we retry per-node allocations. */
>>>>> +    nodes_clear(node_alloc_noretry);
>>>>> +
>>>>> +    for (i = 0; i < num; ++i) {
>>>>> +        struct folio *folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
>>>>> +                        &node_alloc_noretry, &next_node);
>>>>> +        if (!folio)
>>>>>               break;
>>>>> +        spin_lock_irqsave(&hugetlb_lock, flags);
>>>> I suspect there will be more contention on this lock when parallelizing.
>>> 
>>> In the worst case, only as many threads as there are NUMA nodes
>>> contend for the lock. And in my testing it doesn't degrade
>>> performance; rather, it improves performance due to the finer lock
>>> granularity.
>> So the performance does not change if you move the lock out of the
>> loop?
>> 
> 
> If we move the lock out of the loop, then multi-threading becomes
> single-threading, which definitely reduces performance.

No. I mean batching the pages into the pool list in one lock cycle,
just as prep_and_add_allocated_folios() does.

> 
> ```
> +       spin_lock_irqsave(&hugetlb_lock, flags);
>        for (i = 0; i < num; ++i) {
>                struct folio *folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
>                                                &node_alloc_noretry, &next_node);
>                if (!folio)
>                        break;
> -               spin_lock_irqsave(&hugetlb_lock, flags);
>                __prep_account_new_huge_page(h, folio_nid(folio));
>                enqueue_hugetlb_folio(h, folio);
> -               spin_unlock_irqrestore(&hugetlb_lock, flags);
>                cond_resched();
>        }
> +       spin_unlock_irqrestore(&hugetlb_lock, flags);
> }
> ```
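
For reference, a minimal untested sketch of this batching approach,
modeled on the prep_and_add_allocated_folios() pattern and reusing the
helpers from the patch (alloc_pool_huge_folio(),
__prep_account_new_huge_page(), enqueue_hugetlb_folio()); the
allocation loop fills a local list without the lock, then hugetlb_lock
is taken once to account and enqueue the whole batch:

```
static void __init hugetlb_alloc_node(unsigned long start,
				      unsigned long end, void *arg)
{
	struct hstate *h = (struct hstate *)arg;
	int i, num = end - start;
	nodemask_t node_alloc_noretry;
	LIST_HEAD(folio_list);
	struct folio *folio, *tmp_f;
	unsigned long flags;
	int next_node = first_online_node;

	/* Bit mask controlling how hard we retry per-node allocations. */
	nodes_clear(node_alloc_noretry);

	/* Allocation phase: no lock held. */
	for (i = 0; i < num; ++i) {
		folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
					      &node_alloc_noretry, &next_node);
		if (!folio)
			break;
		list_add(&folio->lru, &folio_list);
		cond_resched();
	}

	/* One lock cycle to account and enqueue the whole batch. */
	spin_lock_irqsave(&hugetlb_lock, flags);
	/*
	 * Safe iteration is needed because enqueue_hugetlb_folio()
	 * moves each folio off the local list onto the free list.
	 */
	list_for_each_entry_safe(folio, tmp_f, &folio_list, lru) {
		__prep_account_new_huge_page(h, folio_nid(folio));
		enqueue_hugetlb_folio(h, folio);
	}
	spin_unlock_irqrestore(&hugetlb_lock, flags);
}
```

With this shape, each worker takes hugetlb_lock once per batch rather
than once per page, so the allocation loops still run in parallel while
lock contention stays bounded.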





Thread overview: 30+ messages
2024-01-18 12:39 [RESEND PATCH v4 0/7] hugetlb: parallelize hugetlb page init on boot Gang Li
2024-01-18 12:39 ` [PATCH v4 1/7] hugetlb: code clean for hugetlb_hstate_alloc_pages Gang Li
2024-01-18 12:39 ` [PATCH v4 2/7] hugetlb: split hugetlb_hstate_alloc_pages Gang Li
2024-01-22  3:43   ` Muchun Song
2024-01-18 12:39 ` [PATCH v4 3/7] padata: dispatch works on different nodes Gang Li
2024-01-18 23:04   ` Tim Chen
2024-01-19 15:05     ` Gang Li
2024-01-19  2:59   ` Muchun Song
2024-01-19 15:04     ` Gang Li
2024-01-18 12:39 ` [PATCH v4 4/7] hugetlb: pass *next_nid_to_alloc directly to for_each_node_mask_to_alloc Gang Li
2024-01-18 23:01   ` Tim Chen
2024-01-19  2:54   ` Muchun Song
2024-01-22  6:16   ` Muchun Song
2024-01-22  9:14     ` Gang Li
2024-01-22  9:50       ` Muchun Song
2024-01-18 12:39 ` [PATCH v4 5/7] hugetlb: have CONFIG_HUGETLBFS select CONFIG_PADATA Gang Li
2024-01-18 12:39 ` [PATCH v4 6/7] hugetlb: parallelize 2M hugetlb allocation and initialization Gang Li
2024-01-22  7:10   ` Muchun Song
2024-01-22 10:12     ` Gang Li
2024-01-22 11:30       ` Muchun Song
2024-01-23  2:12         ` Gang Li
2024-01-23  3:32           ` Muchun Song [this message]
2024-01-18 12:39 ` [PATCH v4 7/7] hugetlb: parallelize 1G hugetlb initialization Gang Li
2024-01-18 14:22   ` Kefeng Wang
2024-01-19 14:45     ` Gang Li
2024-01-24  9:23   ` Muchun Song
2024-01-24 10:52     ` Gang Li
2024-01-25  2:48       ` Muchun Song
2024-01-25  3:47         ` Gang Li
2024-01-25  3:56         ` Gang Li
