From: Geert Uytterhoeven <geert@linux-m68k.org>
To: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	Steve Capper <steve.capper@arm.com>,
	Will Deacon <will@kernel.org>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	Anshuman Khandual <anshuman.khandual@arm.com>
Subject: Re: [PATCH v2 3/4] arm64: mm: make vmemmap region a projection of the linear region
Date: Tue, 10 Nov 2020 15:56:38 +0100	[thread overview]
Message-ID: <CAMuHMdVhkNaD_AbJR3gmWLmP3Kn0FRphZmZgHno=oz1MQ2ZmeA@mail.gmail.com> (raw)
In-Reply-To: <CAMj1kXHBC=CQC0YbuPeiVt-VDNQLNDDpCxvoJmcURbN4qK9QNw@mail.gmail.com>

Hi Ard,

On Tue, Nov 10, 2020 at 3:09 PM Ard Biesheuvel <ardb@kernel.org> wrote:
> On Tue, 10 Nov 2020 at 14:10, Ard Biesheuvel <ardb@kernel.org> wrote:
> > On Tue, 10 Nov 2020 at 13:55, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> > > On Thu, Oct 8, 2020 at 5:43 PM Ard Biesheuvel <ardb@kernel.org> wrote:
> > > > Now that we have reverted the introduction of the vmemmap struct page
> > > > pointer and the separate physvirt_offset, we can simplify things further,
> > > > and place the vmemmap region in the VA space in such a way that virtual
> > > > to page translations and vice versa can be implemented using a single
> > > > arithmetic shift.
> > > >
> > > > One happy coincidence resulting from this is that the 48-bit/4k and
> > > > 52-bit/64k configurations (which are assumed to be the two most
> > > > prevalent) end up with the same placement of the vmemmap region. In
> > > > a subsequent patch, we will take advantage of this, and unify the
> > > > memory maps even more.
> > > >
> > > > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > >
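As a rough illustration of the "single arithmetic shift" mentioned in the
description above (a stand-alone user-space sketch with made-up base
addresses, not the actual arm64 macros): once sizeof(struct page) is a
power of two, here 64 = 1 << 6 with 4k pages, the scaling between a
linear-map offset and its vmemmap offset collapses into one shift.

    /* Illustrative only; PAGE_OFFSET and VMEMMAP_START below are
     * hypothetical values, not the kernel's definitions.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT          12                      /* 4k pages */
    #define STRUCT_PAGE_SHIFT   6                       /* log2 of a 64-byte struct page */
    #define PAGE_OFFSET         0xffff000000000000ULL   /* hypothetical linear map base */
    #define VMEMMAP_START       0xfffffbfffe000000ULL   /* hypothetical vmemmap base */

    /* linear VA of a page -> address of its struct page: one shift plus an add */
    static uint64_t virt_to_page_addr(uint64_t va)
    {
            return VMEMMAP_START +
                   ((va - PAGE_OFFSET) >> (PAGE_SHIFT - STRUCT_PAGE_SHIFT));
    }

    /* struct page address -> linear VA of the page: the inverse shift */
    static uint64_t page_addr_to_virt(uint64_t pg)
    {
            return PAGE_OFFSET +
                   ((pg - VMEMMAP_START) << (PAGE_SHIFT - STRUCT_PAGE_SHIFT));
    }

    int main(void)
    {
            uint64_t va = PAGE_OFFSET + (123ULL << PAGE_SHIFT);
            uint64_t pg = virt_to_page_addr(va);

            printf("va %#llx -> page %#llx -> va %#llx\n",
                   (unsigned long long)va,
                   (unsigned long long)pg,
                   (unsigned long long)page_addr_to_virt(pg));
            return 0;
    }
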
> > > This is now commit 8c96400d6a39be76 ("arm64: mm: make vmemmap region a
> > > projection of the linear region") in arm64/for-next/core.
> > >
> > > > --- a/arch/arm64/mm/init.c
> > > > +++ b/arch/arm64/mm/init.c
> > > > @@ -504,6 +504,8 @@ static void __init free_unused_memmap(void)
> > > >   */
> > > >  void __init mem_init(void)
> > > >  {
> > > > +       BUILD_BUG_ON(!is_power_of_2(sizeof(struct page)));
> > >
> > > This check is triggering for me.
> > >
> > > If CONFIG_MEMCG=n, sizeof(struct page) = 56.
> > >
> > > If CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y, this is mitigated by
> > > the explicit alignment:
> > >
> > >     #ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
> > >     #define _struct_page_alignment  __aligned(2 * sizeof(unsigned long))
> > >     #else
> > >     #define _struct_page_alignment
> > >     #endif
> > >
> > >     struct page { ... } _struct_page_alignment;
> > >
> > > However, HAVE_ALIGNED_STRUCT_PAGE is selected only if SLUB,
> > > while my .config is using SLAB.
> > >
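To make the size difference concrete, a quick user-space stand-in (not the
real struct page layout, just a 56-byte dummy struct with and without the
alignment attribute quoted above):

    /* Stand-in structs, not the real struct page: the point is only that
     * a 56-byte struct reaches a power-of-two size once the
     * __aligned(2 * sizeof(unsigned long)) attribute is applied.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct fake_page_slab {                 /* no forced alignment (SLAB case)    */
            unsigned long words[7];         /* 7 * 8 = 56 bytes on a 64-bit host  */
    };

    struct fake_page_slub {                 /* HAVE_ALIGNED_STRUCT_PAGE (SLUB)    */
            unsigned long words[7];
    } __attribute__((aligned(2 * sizeof(unsigned long))));

    static bool is_power_of_2(size_t n)
    {
            return n != 0 && (n & (n - 1)) == 0;
    }

    int main(void)
    {
            printf("unaligned: %zu bytes, power of 2: %d\n",
                   sizeof(struct fake_page_slab),
                   is_power_of_2(sizeof(struct fake_page_slab)));
            printf("aligned:   %zu bytes, power of 2: %d\n",
                   sizeof(struct fake_page_slub),
                   is_power_of_2(sizeof(struct fake_page_slub)));
            return 0;
    }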
> >
> > Thanks for the report. I will look into this.
>
> OK, so we can obviously fix this easily by setting
> CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y unconditionally instead of only 'if
> SLUB'. The question is whether that is likely to lead to any
> performance regressions.
>
> So first of all, having a smaller struct page means we can fit more of
> them into memory. On a 4k pages config with SPARSEMEM_VMEMMAP enabled
> (which allocates struct pages in 2M blocks), every 2M block can cover
> 146 MB of DRAM instead of 128 MB. I'm not sure what kind of DRAM
> arrangement would be needed to take advantage of this in practice,
> though.

So this starts making a difference only for systems with more than 1 GiB
RAM, where we can probably afford losing 2 MiB.
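
Back-of-the-envelope, using the 2 MiB block size and struct page sizes above
(4 KiB pages assumed):

    2 MiB / 64 B per struct page = 32768 entries -> 32768 * 4 KiB  =  128 MiB covered
    2 MiB / 56 B per struct page = 37449 entries -> 37449 * 4 KiB ~=  146 MiB covered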

> Another aspect is D-cache utilization: cache lines are typically 64
> bytes on arm64, and while we can improve Dcache utilization in theory
> (by virtue of the smaller size), the random access nature of struct
> pages may well result in the opposite, given that 3 out of 4 struct
> pages now straddle two cachelines.
>
> Given the above, and given the purpose of this patch series, which was
> to tidy up and unify different configurations, in order to reduce the
> size of the validation matrix, I think it would be reasonable to
> simply set CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y for arm64 in all cases.

Thanks, sounds reasonable to me.
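
For reference, the 3-out-of-4 figure checks out: with densely packed 56-byte
entries against 64-byte lines, the start offsets repeat every 8 entries
(448 bytes = 7 lines), and 6 of those 8 cross a line boundary. A trivial
user-space check:

    /* Quick check of the "3 out of 4" figure: 56-byte entries packed
     * back to back, 64-byte cache lines; the offset pattern repeats
     * every 8 entries.
     */
    #include <stdio.h>

    int main(void)
    {
            int straddling = 0;

            for (int i = 0; i < 8; i++) {
                    unsigned long first = i * 56UL;          /* first byte of entry */
                    unsigned long last  = first + 56 - 1;    /* last byte of entry  */

                    if (first / 64 != last / 64)             /* crosses a line?     */
                            straddling++;
            }
            printf("%d of 8 entries straddle a cache line boundary\n", straddling);
            return 0;
    }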

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


Thread overview: 26+ messages
2020-10-08 15:35 [PATCH v2 0/4] arm64: mm: optimize VA space organization for 52-bit Ard Biesheuvel
2020-10-08 15:35 ` [PATCH v2 1/4] arm64: mm: use single quantity to represent the PA to VA translation Ard Biesheuvel
2020-10-13 16:14   ` Steve Capper
2020-10-13 16:47   ` Steve Capper
2020-10-15 10:47   ` Will Deacon
2020-10-08 15:36 ` [PATCH v2 2/4] arm64: mm: extend linear region for 52-bit VA configurations Ard Biesheuvel
2020-10-13 16:51   ` Steve Capper
2020-10-13 16:57     ` Ard Biesheuvel
2020-10-13 17:38       ` Steve Capper
2020-10-14  3:44   ` Anshuman Khandual
2020-10-14  7:18     ` Ard Biesheuvel
2020-10-08 15:36 ` [PATCH v2 3/4] arm64: mm: make vmemmap region a projection of the linear region Ard Biesheuvel
2020-10-13 16:52   ` Steve Capper
2020-11-10 12:55   ` Geert Uytterhoeven
2020-11-10 13:10     ` Ard Biesheuvel
2020-11-10 14:08       ` Ard Biesheuvel
2020-11-10 14:56         ` Geert Uytterhoeven [this message]
2020-11-10 15:39         ` Catalin Marinas
2020-11-10 15:42           ` Ard Biesheuvel
2020-11-10 16:14             ` Catalin Marinas
2020-11-10 16:18               ` Ard Biesheuvel
2020-10-08 15:36 ` [PATCH v2 4/4] arm64: mm: tidy up top of kernel VA space Ard Biesheuvel
2020-10-13 16:52   ` Steve Capper
2020-10-09 14:16 ` [PATCH v2 0/4] arm64: mm: optimize VA space organization for 52-bit Ard Biesheuvel
2020-10-15 20:40 ` Will Deacon
2020-11-09 18:51 ` Catalin Marinas
