From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
To: Michael Ellerman <mpe@ellerman.id.au>, linuxppc-dev@ozlabs.org
Cc: bsingharora@gmail.com, paulus@samba.org
Subject: Re: [PATCH] powerpc/mm/book3s/64: Rework page table geometry for lower memory usage
Date: Tue, 9 May 2017 13:54:33 +0530	[thread overview]
Message-ID: <3f36d44c-a8a9-09d4-70f3-9e6088b9676a@linux.vnet.ibm.com> (raw)
In-Reply-To: <1494317148-18554-1-git-send-email-mpe@ellerman.id.au>
On Tuesday 09 May 2017 01:35 PM, Michael Ellerman wrote:
> Recently in commit f6eedbba7a26 ("powerpc/mm/hash: Increase VA range to 128TB")
> we increased the virtual address space for user processes to 128TB by default,
> and up to 512TB if user space opts in.
>
> This obviously required expanding the range of the Linux page tables. For Book3s
> 64-bit using hash and with PAGE_SIZE=64K, we increased the PGD to 2^15 entries.
> This meant we could cover the full address range, while still being able to
> insert a 16G hugepage at the PGD level and a 16M hugepage in the PMD.
>
> The downside of that geometry is that it uses a lot of memory for the PGD, and
> in particular makes the PGD a 4-page allocation, which means it's much more
> likely to fail under memory pressure.
>
> Instead we can make the PMD larger, so that a single PUD entry maps 16G,
> allowing the 16G hugepages to sit at that level in the tree. We're then able to
> split the remaining bits between the PUD and PGD. We make the PGD slightly
> larger as that results in lower memory usage for typical programs.
>
> When THP is enabled the PMD actually doubles in size, to 2^11 entries, or 2^14
> bytes, which is large but still < PAGE_SIZE.
>

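For reference, the arithmetic behind the geometry change can be sketched as below (my own illustrative script, not part of the patch; it assumes 64K pages, 8-byte page table entries, and the H_*_INDEX_SIZE values from the diff):

```python
PAGE_SHIFT = 16   # 64K pages
PTE_SHIFT = 3     # 8-byte page table entries

def geometry(pte, pmd, pud, pgd):
    """Given the index sizes (in bits) at each level, return the number of
    address bits mapped by one entry at the PMD, PUD and PGD levels, the
    total VA bits covered, and the size of the PGD allocation in bytes."""
    pmd_map = PAGE_SHIFT + pte    # bits mapped by one PMD entry
    pud_map = pmd_map + pmd       # bits mapped by one PUD entry
    pgd_map = pud_map + pud       # bits mapped by one PGD entry
    total   = pgd_map + pgd       # total virtual address bits
    pgd_bytes = 1 << (pgd + PTE_SHIFT)
    return pmd_map, pud_map, pgd_map, total, pgd_bytes

# Old geometry: PTE=8, PMD=5, PUD=5, PGD=15
old = geometry(8, 5, 5, 15)
# New geometry: PTE=8, PMD=10, PUD=7, PGD=8
new = geometry(8, 10, 7, 8)
```

With the old values this gives a 16M mapping per PMD entry, 16G per PGD entry (so 16G hugepages sat at the PGD level), and a 256K PGD — the 4-page allocation the patch complains about. With the new values a single PUD entry maps 16G, total coverage stays at 2^49 = 512TB, and the PGD shrinks to 2K.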
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
>  arch/powerpc/include/asm/book3s/64/hash-64k.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
> index 214219dff87c..9732837aaae8 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
> @@ -2,9 +2,9 @@
>  #define _ASM_POWERPC_BOOK3S_64_HASH_64K_H
>
>  #define H_PTE_INDEX_SIZE  8
> -#define H_PMD_INDEX_SIZE  5
> -#define H_PUD_INDEX_SIZE  5
> -#define H_PGD_INDEX_SIZE  15
> +#define H_PMD_INDEX_SIZE  10
> +#define H_PUD_INDEX_SIZE  7
> +#define H_PGD_INDEX_SIZE  8
>
>  /*
>   * 64k aligned address free up few of the lower bits of RPN for us
>

Thread overview: 4+ messages
2017-05-09  8:05 [PATCH] powerpc/mm/book3s/64: Rework page table geometry for lower memory usage Michael Ellerman
2017-05-09  8:24 ` Aneesh Kumar K.V [this message]
2017-05-09  8:43 ` Balbir Singh
2017-05-15  5:06 ` Michael Ellerman
