From: David Hildenbrand <david@redhat.com>
To: Oscar Salvador <osalvador@suse.de>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	x86@kernel.org, "H . Peter Anvin" <hpa@zytor.com>,
	Michal Hocko <mhocko@kernel.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] x86/vmemmap: Remove !PAGE_ALIGNED case in remove_pte_table
Date: Tue, 2 Feb 2021 14:20:36 +0100
Message-ID: <1d8ca669-6c92-4692-b2c7-1bfaa6f9707e@redhat.com>
In-Reply-To: <20210202112450.11932-2-osalvador@suse.de>

> @@ -962,7 +962,6 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
>   {
>   	unsigned long next, pages = 0;
>   	pte_t *pte;
> -	void *page_addr;
>   	phys_addr_t phys_addr;
>   
>   	pte = pte_start + pte_index(addr);
> @@ -983,42 +982,19 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
>   		if (phys_addr < (phys_addr_t)0x40000000)
>   			return;
>   
> -		if (PAGE_ALIGNED(addr) && PAGE_ALIGNED(next)) {
> -			/*
> -			 * Do not free direct mapping pages since they were
> -			 * freed when offlining, or simplely not in use.
> -			 */
> -			if (!direct)
> -				free_pagetable(pte_page(*pte), 0);
> -
> -			spin_lock(&init_mm.page_table_lock);
> -			pte_clear(&init_mm, addr, pte);
> -			spin_unlock(&init_mm.page_table_lock);
> -
> -			/* For non-direct mapping, pages means nothing. */
> -			pages++;
> -		} else {
> -			/*
> -			 * If we are here, we are freeing vmemmap pages since
> -			 * direct mapped memory ranges to be freed are aligned.
> -			 *
> -			 * If we are not removing the whole page, it means
> -			 * other page structs in this page are being used and
> -			 * we canot remove them. So fill the unused page_structs
> -			 * with 0xFD, and remove the page when it is wholly
> -			 * filled with 0xFD.
> -			 */
> -			memset((void *)addr, PAGE_INUSE, next - addr);
> +		/*
> +		 * Do not free direct mapping pages since they were
> +		 * freed when offlining, or simplely not in use.
> +		 */

s/simplely/simply/ (I know, not your fault :) )

However, that comment is highly confusing. There is nothing to free in
the case of a direct mapping; we never allocated anything. That's how a
direct map works.

I'd just get rid of the comment completely - we also don't have one at 
the other "if (!direct)" places.


(side note: all this freeing before unmapping looks very weird, but at
least there shouldn't be any valid accesses anymore)
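
For reference, the ordering in question ends up looking roughly like this
(pieced together from the hunk quoted above; the rest of the loop body is
elided in the patch, so treat it as a sketch rather than the exact result):

	if (!direct)
		free_pagetable(pte_page(*pte), 0);	/* vmemmap page freed here... */

	spin_lock(&init_mm.page_table_lock);
	pte_clear(&init_mm, addr, pte);			/* ...but only unmapped here */
	spin_unlock(&init_mm.page_table_lock);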

> +		if (!direct)
> +			free_pagetable(pte_page(*pte), 0);

[...]

>   {
> +	/*
> +	 * See comment in vmemmap_populate().
> +	 */

I'd drop this comment ...

> +	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
> +	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
> +
>   	remove_pagetable(start, end, false, altmap);
>   }
>   
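With the "See comment in vmemmap_populate()" comment dropped, the whole
function would read roughly like this (a sketch; the signature comes from
mainline x86 mm/init_64.c and is not part of the quoted hunk):

void __ref vmemmap_free(unsigned long start, unsigned long end,
			struct vmem_altmap *altmap)
{
	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));

	remove_pagetable(start, end, false, altmap);
}
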
> @@ -1556,6 +1538,15 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>   {
>   	int err;
>   
> +	/*
> +	 * __populate_section_memmap enforces the range to be added to be
> +	 * PMD aligned. Therefore we can be sure that, as long as the
> +	 * struct page size is multiple of 8, the vmemmap range will be
> +	 * PAGE aligned.
> +	 */

... and this comment, moving the details into the patch description.

The commit should be easy to find via git blame in case anybody
wonders why.
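
If it helps for that patch description, the arithmetic is short (assuming
4 KiB base pages, 2 MiB PMDs and the usual sizeof(struct page) == 64 on
x86-64):

	struct pages per PMD-sized range:  PMD_SIZE / PAGE_SIZE        = 512
	vmemmap bytes for those pages:     512 * sizeof(struct page)   = 32768
	                                                               = 8 * PAGE_SIZE

More generally, 512 * sizeof(struct page) is a multiple of PAGE_SIZE (4096)
as long as sizeof(struct page) is a multiple of 8, so a PMD-aligned range
to be added always results in a PAGE-aligned vmemmap range.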

> +	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
> +	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
> +
>   	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
>   		err = vmemmap_populate_basepages(start, end, node, NULL);
>   	else if (boot_cpu_has(X86_FEATURE_PSE))
> 

Apart from that looks good, thanks!

-- 
Thanks,

David / dhildenb



Thread overview: 7+ messages
2021-02-02 11:24 [PATCH 0/2] Cleanup and fixups for vmemmap handling Oscar Salvador
2021-02-02 11:24 ` [PATCH 1/2] x86/vmemmap: Remove !PAGE_ALIGNED case in remove_pte_table Oscar Salvador
2021-02-02 13:20   ` David Hildenbrand [this message]
2021-02-02 11:24 ` [PATCH 2/2] x86/vmemmap: Handle unpopulated sub-pmd ranges Oscar Salvador
2021-02-02 13:29   ` David Hildenbrand
2021-02-02 13:52     ` Oscar Salvador
2021-02-02 20:17   ` kernel test robot
