From: ard.biesheuvel@linaro.org (Ard Biesheuvel)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 09/13] arm64: mm: explicitly bootstrap the linear mapping
Date: Thu, 7 May 2015 21:21:28 +0200
Message-ID: <CAKv+Gu-26A1pawUoeY28mXb1WzhPv4O0Rh433Yj9uBjxFh0MWg@mail.gmail.com>
In-Reply-To: <20150507165416.GB11067@e104818-lin.cambridge.arm.com>

On 7 May 2015 at 18:54, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Wed, Apr 15, 2015 at 05:34:20PM +0200, Ard Biesheuvel wrote:
>> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
>> index ceec4def354b..338eaa7bcbfd 100644
>> --- a/arch/arm64/kernel/vmlinux.lds.S
>> +++ b/arch/arm64/kernel/vmlinux.lds.S
>> @@ -68,6 +68,17 @@ PECOFF_FILE_ALIGNMENT = 0x200;
>>  #define ALIGN_DEBUG_RO_MIN(min)              . = ALIGN(min);
>>  #endif
>>
>> +/*
>> + * The pgdir region needs to be mappable using a single PMD or PUD sized region,
>> + * so it should not cross a 512 MB or 1 GB alignment boundary, respectively
>> + * (depending on page size). So align to an upper bound of its size.
>> + */
>> +#if CONFIG_ARM64_PGTABLE_LEVELS == 2
>> +#define PGDIR_ALIGN  (8 * PAGE_SIZE)
>> +#else
>> +#define PGDIR_ALIGN  (16 * PAGE_SIZE)
>> +#endif
>
> Isn't 8 pages sufficient in both cases? Unless some other patch changes
> the idmap and swapper, I can count maximum 7 pages in total.
>

The preceding patch moves the fixmap page tables to this region as well.
But the logic is still incorrect: we only need 16x PAGE_SIZE for 4
levels (7 + 3 == 10 pages); the remaining configurations all need <= 8.
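
For illustration, the selection could then be restructured along these
lines (just a sketch based on the counts above, not the final patch):

#if CONFIG_ARM64_PGTABLE_LEVELS == 4
/* 4 levels need 10 pages (7 + 3), so round the alignment up to 16 pages */
#define PGDIR_ALIGN	(16 * PAGE_SIZE)
#else
/* the remaining configurations need at most 8 pages */
#define PGDIR_ALIGN	(8 * PAGE_SIZE)
#endif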

>> +
>>  SECTIONS
>>  {
>>       /*
>> @@ -160,7 +171,7 @@ SECTIONS
>>
>>       BSS_SECTION(0, 0, 0)
>>
>> -     .pgdir (NOLOAD) : ALIGN(PAGE_SIZE) {
>> +     .pgdir (NOLOAD) : ALIGN(PGDIR_ALIGN) {
>>               idmap_pg_dir = .;
>>               . += IDMAP_DIR_SIZE;
>>               swapper_pg_dir = .;
>> @@ -185,6 +196,11 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
>>       "ID map text too big or misaligned")
>>
>>  /*
>> + * Check that the chosen PGDIR_ALIGN value is sufficient.
>> + */
>> +ASSERT(SIZEOF(.pgdir) < ALIGNOF(.pgdir), ".pgdir size exceeds its alignment")
>> +
>> +/*
>>   * If padding is applied before .head.text, virt<->phys conversions will fail.
>>   */
>>  ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index c27ab20a5ba9..93e5a2497f01 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -380,26 +380,68 @@ static void __init bootstrap_early_mapping(unsigned long addr,
>>       }
>>  }
>>
>> +static void __init bootstrap_linear_mapping(unsigned long va_offset)
>> +{
>> +     /*
>> +      * Bootstrap the linear range that covers swapper_pg_dir so that the
>> +      * statically allocated page tables as well as newly allocated ones
>> +      * are accessible via the linear mapping.
>> +      */
>
> Just move the comment outside the function.
>

OK

>> +     static struct bootstrap_pgtables linear_bs_pgtables __pgdir;
>> +     const phys_addr_t swapper_phys = __pa(swapper_pg_dir);
>> +     unsigned long swapper_virt = __phys_to_virt(swapper_phys) + va_offset;
>> +     struct memblock_region *reg;
>> +
>> +     bootstrap_early_mapping(swapper_virt, &linear_bs_pgtables,
>> +                             IS_ENABLED(CONFIG_ARM64_64K_PAGES));
>> +
>> +     /* now find the memblock that covers swapper_pg_dir, and clip */
>> +     for_each_memblock(memory, reg) {
>> +             phys_addr_t start = reg->base;
>> +             phys_addr_t end = start + reg->size;
>> +             unsigned long vstart, vend;
>> +
>> +             if (start > swapper_phys || end <= swapper_phys)
>> +                     continue;
>> +
>> +#ifdef CONFIG_ARM64_64K_PAGES
>> +             /* clip the region to PMD size */
>> +             vstart = max(swapper_virt & PMD_MASK,
>> +                          round_up(__phys_to_virt(start + va_offset),
>> +                                   PAGE_SIZE));
>> +             vend = min(round_up(swapper_virt, PMD_SIZE),
>> +                        round_down(__phys_to_virt(end + va_offset),
>> +                                   PAGE_SIZE));
>> +#else
>> +             /* clip the region to PUD size */
>> +             vstart = max(swapper_virt & PUD_MASK,
>> +                          round_up(__phys_to_virt(start + va_offset),
>> +                                   PMD_SIZE));
>> +             vend = min(round_up(swapper_virt, PUD_SIZE),
>> +                        round_down(__phys_to_virt(end + va_offset),
>> +                                   PMD_SIZE));
>> +#endif
>> +
>> +             create_mapping(__pa(vstart - va_offset), vstart, vend - vstart,
>> +                            PAGE_KERNEL_EXEC);
>> +
>> +             /*
>> +              * Temporarily limit the memblock range. We need to do this as
>> +              * create_mapping requires puds, pmds and ptes to be allocated
>> +              * from memory addressable from the early linear mapping.
>> +              */
>> +             memblock_set_current_limit(__pa(vend - va_offset));
>> +
>> +             return;
>> +     }
>> +     BUG();
>> +}
>
> I'll probably revisit this function after I see the whole series. But in
> the meantime, if the kernel is not loaded in the first memblock (in
> address order), isn't there a risk that we allocate memory from the
> first memblock which is not mapped yet?
>

memblock allocates top down, so it should only allocate from this
region, unless the remaining room in it is completely reserved. I think
that is a theoretical problem which exists today as well: the boot
protocol does not mandate that the 512MB/1GB region containing the
kernel has any unreserved room left.
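
As a rough sketch of why that is enough (illustrative only -- the
helper name is made up and the memblock_alloc() signature is the one
from this era, both assumptions rather than anything taken from the
patch), an early page table allocation after the limit has been
clamped looks like this:

#include <linux/memblock.h>
#include <linux/string.h>

/*
 * memblock hands out the highest free range below the current limit,
 * so once memblock_set_current_limit(__pa(vend - va_offset)) has run,
 * the physical address returned here lies inside the region that the
 * bootstrap mapping above has just made accessible, and __va() on it
 * can be dereferenced safely.
 */
static void __init *early_pgtable_alloc_sketch(unsigned long sz)
{
	void *ptr = __va(memblock_alloc(sz, sz));	/* phys address below the limit */

	memset(ptr, 0, sz);
	return ptr;
}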

-- 
Ard.

Thread overview: 24+ messages
2015-04-15 15:34 [PATCH v4 00/13] arm64: update/clarify/relax Image and FDT placement rules Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 01/13] arm64: reduce ID map to a single page Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 02/13] arm64: drop sleep_idmap_phys and clean up cpu_resume() Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 03/13] of/fdt: split off FDT self reservation from memreserve processing Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 04/13] arm64: use fixmap region for permanent FDT mapping Ard Biesheuvel
2015-04-17 15:13   ` Mark Rutland
2015-04-15 15:34 ` [PATCH v4 05/13] arm64/efi: adapt to relaxed FDT placement requirements Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 06/13] arm64: implement our own early_init_dt_add_memory_arch() Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 07/13] arm64: use more granular reservations for static page table allocations Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 08/13] arm64: split off early mapping code from early_fixmap_init() Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 09/13] arm64: mm: explicitly bootstrap the linear mapping Ard Biesheuvel
2015-05-07 16:54   ` Catalin Marinas
2015-05-07 19:21     ` Ard Biesheuvel [this message]
2015-05-08 14:44       ` Catalin Marinas
2015-05-08 15:03         ` Ard Biesheuvel
2015-05-08 16:43           ` Catalin Marinas
2015-05-08 16:59             ` Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 10/13] arm64: move kernel mapping out of linear region Ard Biesheuvel
2015-05-08 17:16   ` Catalin Marinas
2015-05-08 17:26     ` Ard Biesheuvel
2015-05-08 17:27       ` Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 11/13] arm64: map linear region as non-executable Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 12/13] arm64: allow kernel Image to be loaded anywhere in physical memory Ard Biesheuvel
2015-04-15 15:34 ` [PATCH v4 13/13] arm64/efi: adapt to relaxed kernel Image placement requirements Ard Biesheuvel
