From: Michal Hocko <mhocko@suse.com>
To: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>,
	Mel Gorman <mgorman@suse.de>,
	Christoph Hellwig <hch@infradead.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nicholas Piggin <npiggin@gmail.com>,
	Hillf Danton <hdanton@sina.com>,
	Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Vasily Averin <vvs@virtuozzo.com>
Subject: Re: [PATCH] mm/vmalloc: Eliminate an extra orig_gfp_mask
Date: Thu, 4 Nov 2021 09:59:48 +0100	[thread overview]
Message-ID: <YYOhBGACLb+p1jl0@dhcp22.suse.cz> (raw)
In-Reply-To: <20211103200703.2265-1-urezki@gmail.com>

[Cc Vasily]

On Wed 03-11-21 21:07:03, Uladzislau Rezki wrote:
> That extra variable was introduced just to keep the originally passed
> gfp_mask, because gfp_mask is updated with __GFP_NOWARN on entry; thus
> error handling messages were broken.

I am not sure what you mean by the "error handling messages were broken"
part.

It is true that the current Linus tree has broken allocation failure
reporting, but that is not the fault of orig_gfp_mask. In fact, the patch
fixing that ("mm/vmalloc: repair warn_alloc()s in __vmalloc_area_node()",
currently in the akpm tree) is the one adding the additional mask.
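
For context, warn_alloc() bails out early when __GFP_NOWARN is set, roughly
like this (simplified sketch, not a verbatim copy of mm/page_alloc.c):

void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
{
	static DEFINE_RATELIMIT_STATE(nopage_rs, 10 * HZ, 1);

	/* __GFP_NOWARN turns the whole report into a no-op */
	if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
		return;

	/* ... print the "vmalloc error: ..." message and memory info ... */
}

So once __GFP_NOWARN has been ORed into gfp_mask, every later
warn_alloc(gfp_mask, ...) call in __vmalloc_area_node() is silenced, which
is what the orig_gfp_mask workaround was papering over.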
 
> Instead we can keep the original gfp_mask unmodified and pass an extra
> __GFP_NOWARN flag together with gfp_mask as a parameter to the
> vm_area_alloc_pages() function. It will make it less confusing.

I would tend to agree that this is a better approach. There is already a
nested_gfp mask, and one more doesn't add to the readability.

Maybe we should drop the above patch and just go with one which doesn't
introduce the intermediate step and an additional gfp mask.
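
IOW (a rough sketch of the end state, based on the hunks below, not an
actual patch) __vmalloc_area_node() would then be down to:

	const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
	/* gfp_mask itself stays untouched, so warn_alloc() keeps reporting */

	area->nr_pages = vm_area_alloc_pages(gfp_mask | __GFP_NOWARN,
			node, page_order, nr_small_pages, area->pages);
	/* ... */
	warn_alloc(gfp_mask, NULL, "vmalloc error: ...");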

> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> ---
>  mm/vmalloc.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d2a00ad4e1dd..3b549ff5c95e 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2920,7 +2920,6 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  				 int node)
>  {
>  	const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
> -	const gfp_t orig_gfp_mask = gfp_mask;
>  	unsigned long addr = (unsigned long)area->addr;
>  	unsigned long size = get_vm_area_size(area);
>  	unsigned long array_size;
> @@ -2928,7 +2927,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  	unsigned int page_order;
>  
>  	array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
> -	gfp_mask |= __GFP_NOWARN;
> +
>  	if (!(gfp_mask & (GFP_DMA | GFP_DMA32)))
>  		gfp_mask |= __GFP_HIGHMEM;
>  
> @@ -2941,7 +2940,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  	}
>  
>  	if (!area->pages) {
> -		warn_alloc(orig_gfp_mask, NULL,
> +		warn_alloc(gfp_mask, NULL,
>  			"vmalloc error: size %lu, failed to allocated page array size %lu",
>  			nr_small_pages * PAGE_SIZE, array_size);
>  		free_vm_area(area);
> @@ -2951,8 +2950,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
>  	page_order = vm_area_page_order(area);
>  
> -	area->nr_pages = vm_area_alloc_pages(gfp_mask, node,
> -		page_order, nr_small_pages, area->pages);
> +	area->nr_pages = vm_area_alloc_pages(gfp_mask | __GFP_NOWARN,
> +		node, page_order, nr_small_pages, area->pages);
>  
>  	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
>  
> @@ -2961,7 +2960,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  	 * allocation request, free them via __vfree() if any.
>  	 */
>  	if (area->nr_pages != nr_small_pages) {
> -		warn_alloc(orig_gfp_mask, NULL,
> +		warn_alloc(gfp_mask, NULL,
>  			"vmalloc error: size %lu, page order %u, failed to allocate pages",
>  			area->nr_pages * PAGE_SIZE, page_order);
>  		goto fail;
> @@ -2969,7 +2968,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  
>  	if (vmap_pages_range(addr, addr + size, prot, area->pages,
>  			page_shift) < 0) {
> -		warn_alloc(orig_gfp_mask, NULL,
> +		warn_alloc(gfp_mask, NULL,
>  			"vmalloc error: size %lu, failed to map pages",
>  			area->nr_pages * PAGE_SIZE);
>  		goto fail;
> -- 
> 2.17.1

-- 
Michal Hocko
SUSE Labs
