Subject: Re: [PATCH] mm/vmalloc: allocate small pages for area->pages
Date: Thu, 9 Dec 2021 17:27:25 +0800
Message-ID: <50ea6251-fbde-10d9-c37c-3198aa9e2d82@linux.alibaba.com>
From: Yu Xu <xuyu@linux.alibaba.com>
To: Nicholas Piggin, linux-mm@kvack.org
Cc: akpm@linux-foundation.org
In-Reply-To: <1639037882.ddpnbp5ftw.astroid@bobo.none>

On 12/9/21 4:23 PM, Nicholas Piggin wrote:
> Excerpts from Xu Yu's message of December 7, 2021 7:46 pm:
>> The area->pages array stores the struct pages allocated for vmalloc
>> mappings. The allocated memory can be hugepage backed if the arch has
>> HAVE_ARCH_HUGE_VMALLOC set, while area->pages itself does not have to
>> be hugepage backed.
>>
>> Suppose that we want to vmalloc 1026M of memory; then area->pages is
>> 2052K in size, which is larger than PMD_SIZE when the page size is 4K.
>> Currently, 4096K will be allocated for area->pages, wherein 2044K is
>> wasted.
>>
>> This introduces __vmalloc_node_no_huge, and makes area->pages backed
>> by small pages, because allocating hugepages for area->pages is
>> unnecessary and vulnerable to abuse.
>
> Any vmalloc allocation will be subject to internal fragmentation like
> this. What makes this one special? Is there a way to improve it for
> all with some heuristic?

As described in the commit log, I think vmalloc memory (*data*) can be
hugepage backed, while hugepages for area->pages (*meta*) are
unnecessary.

There should be heuristic ways, just like the THP settings (always,
madvise, never).
But such heuristic ways are mainly for data allocation, and I'm not
sure it is worth bringing such logic in.

>
> There would be an argument for a size-optimised vmalloc vs a space
> optimised one. An accounting structure like this doesn't matter
> much for speed. A vfs hash table does. Is it worth doing though? How
> much do you gain in practice?

To be honest, I wrote the patch when studying your patchset. No real
issue was hit, and therefore there is no actual gain in practice.

Perhaps I should have added an RFC tag to the patch. However, I saw
that Andrew Morton has added this patch to the -mm tree. I wonder if
we need to reconsider this patch.

>
> Thanks,
> Nick
>
>>
>> Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
>> ---
>>  include/linux/vmalloc.h |  2 ++
>>  mm/vmalloc.c            | 15 ++++++++++++---
>>  2 files changed, 14 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
>> index 6e022cc712e6..e93f39eb46a5 100644
>> --- a/include/linux/vmalloc.h
>> +++ b/include/linux/vmalloc.h
>> @@ -150,6 +150,8 @@ extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
>>  			const void *caller) __alloc_size(1);
>>  void *__vmalloc_node(unsigned long size, unsigned long align, gfp_t gfp_mask,
>>  		int node, const void *caller) __alloc_size(1);
>> +void *__vmalloc_node_no_huge(unsigned long size, unsigned long align,
>> +		gfp_t gfp_mask, int node, const void *caller) __alloc_size(1);
>>  void *vmalloc_no_huge(unsigned long size) __alloc_size(1);
>>
>>  extern void vfree(const void *addr);
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index d2a00ad4e1dd..0bdbb96d3e3f 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -2925,17 +2925,18 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>>  	unsigned long size = get_vm_area_size(area);
>>  	unsigned long array_size;
>>  	unsigned int nr_small_pages = size >> PAGE_SHIFT;
>> +	unsigned int max_small_pages = ALIGN(size, 1UL << page_shift) >> PAGE_SHIFT;
>>  	unsigned int page_order;
>>
>> -	array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
>> +	array_size = (unsigned long)max_small_pages * sizeof(struct page *);
>>  	gfp_mask |= __GFP_NOWARN;
>>  	if (!(gfp_mask & (GFP_DMA | GFP_DMA32)))
>>  		gfp_mask |= __GFP_HIGHMEM;
>>
>>  	/* Please note that the recursion is strictly bounded. */
>>  	if (array_size > PAGE_SIZE) {
>> -		area->pages = __vmalloc_node(array_size, 1, nested_gfp, node,
>> -					area->caller);
>> +		area->pages = __vmalloc_node_no_huge(array_size, 1, nested_gfp,
>> +					node, area->caller);
>>  	} else {
>>  		area->pages = kmalloc_node(array_size, nested_gfp, node);
>>  	}
>> @@ -3114,6 +3115,14 @@ void *__vmalloc_node(unsigned long size, unsigned long align,
>>  	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
>>  			gfp_mask, PAGE_KERNEL, 0, node, caller);
>>  }
>> +
>> +void *__vmalloc_node_no_huge(unsigned long size, unsigned long align,
>> +		gfp_t gfp_mask, int node, const void *caller)
>> +{
>> +	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
>> +			gfp_mask, PAGE_KERNEL, VM_NO_HUGE_VMAP, node, caller);
>> +}
>> +
>>  /*
>>   * This is only for performance analysis of vmalloc and stress purpose.
>>   * It is required by vmalloc test module, therefore do not use it other
>> --
>> 2.20.1.2432.ga663e714
>>
>>

-- 
Thanks, Yu