linux-kernel.vger.kernel.org archive mirror
* [PATCH] arm64: Make sure permission updates happen for pmd/pud
@ 2018-05-22 23:50 Laura Abbott
  2018-05-23 10:51 ` Will Deacon
  0 siblings, 1 reply; 2+ messages in thread
From: Laura Abbott @ 2018-05-22 23:50 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Ard Biesheuvel
  Cc: Laura Abbott, linux-arm-kernel, linux-kernel, Kees Cook, Peter Robinson

Commit 15122ee2c515 ("arm64: Enforce BBM for huge IO/VMAP mappings")
disallowed block mappings for ioremap since that code does not honor
break-before-make. However, the same APIs are also used for permission
updates, and the extra checks prevent those updates from happening even
though they should be permitted. This results in read-only permissions
not being fully applied. Visibly, this can occasionally be seen as a
failure of the built-in rodata test when the test data ends up in a
section mapping, or as an odd RW gap in the page table dump. Fix this by
keeping the check in the top-level p*d_set_huge APIs but calling
separate internal functions for the actual update.

Reported-by: Peter Robinson <pbrobinson@gmail.com>
Fixes: 15122ee2c515 ("arm64: Enforce BBM for huge IO/VMAP mappings")
Signed-off-by: Laura Abbott <labbott@redhat.com>
---
 arch/arm64/mm/mmu.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)
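
For context, a rough sketch of the permission-update path that runs into
the present-entry check (simplified from the mark_rodata_ro() /
update_mapping_prot() flow in this file at the time; illustration only,
not part of the patch):

	/*
	 * mark_rodata_ro() remaps the already-live rodata section
	 * read-only by walking the existing page tables again:
	 */
	update_mapping_prot(__pa_symbol(__start_rodata),
			    (unsigned long)__start_rodata,
			    (unsigned long)__init_begin -
			    (unsigned long)__start_rodata,
			    PAGE_KERNEL_RO);
	/*
	 * update_mapping_prot() ends up in __create_pgd_mapping(), whose
	 * helpers call pmd_set_huge()/pud_set_huge() on entries that are
	 * already present, so the BBM check added in 15122ee2c515
	 * silently skips the permission change.
	 */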

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2dbb2c9f1ec1..57517ad86910 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -66,6 +66,9 @@ static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
 static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
 static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
 
+static void __pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot);
+static void __pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot);
+
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
@@ -200,7 +203,7 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
 		/* try section mapping first */
 		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
 		    (flags & NO_BLOCK_MAPPINGS) == 0) {
-			pmd_set_huge(pmdp, phys, prot);
+			__pmd_set_huge(pmdp, phys, prot);
 
 			/*
 			 * After the PMD entry has been populated once, we
@@ -299,7 +302,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 		 */
 		if (use_1G_block(addr, next, phys) &&
 		    (flags & NO_BLOCK_MAPPINGS) == 0) {
-			pud_set_huge(pudp, phys, prot);
+			__pud_set_huge(pudp, phys, prot);
 
 			/*
 			 * After the PUD entry has been populated once, we
@@ -929,31 +932,40 @@ int __init arch_ioremap_pmd_supported(void)
 	return 1;
 }
 
-int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
+static void __pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
 {
 	pgprot_t sect_prot = __pgprot(PUD_TYPE_SECT |
 					pgprot_val(mk_sect_prot(prot)));
 
+	BUG_ON(phys & ~PUD_MASK);
+	set_pud(pudp, pfn_pud(__phys_to_pfn(phys), sect_prot));
+}
+
+int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
+{
 	/* ioremap_page_range doesn't honour BBM */
 	if (pud_present(READ_ONCE(*pudp)))
 		return 0;
 
-	BUG_ON(phys & ~PUD_MASK);
-	set_pud(pudp, pfn_pud(__phys_to_pfn(phys), sect_prot));
+	__pud_set_huge(pudp, phys, prot);
 	return 1;
 }
 
-int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
+static void __pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
 {
 	pgprot_t sect_prot = __pgprot(PMD_TYPE_SECT |
 					pgprot_val(mk_sect_prot(prot)));
+	BUG_ON(phys & ~PMD_MASK);
+	set_pmd(pmdp, pfn_pmd(__phys_to_pfn(phys), sect_prot));
+}
 
+int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
+{
 	/* ioremap_page_range doesn't honour BBM */
 	if (pmd_present(READ_ONCE(*pmdp)))
 		return 0;
 
-	BUG_ON(phys & ~PMD_MASK);
-	set_pmd(pmdp, pfn_pmd(__phys_to_pfn(phys), sect_prot));
+	__pmd_set_huge(pmdp, phys, prot);
 	return 1;
 }
 
-- 
2.17.0

* Re: [PATCH] arm64: Make sure permission updates happen for pmd/pud
  2018-05-22 23:50 [PATCH] arm64: Make sure permission updates happen for pmd/pud Laura Abbott
@ 2018-05-23 10:51 ` Will Deacon
  0 siblings, 0 replies; 2+ messages in thread
From: Will Deacon @ 2018-05-23 10:51 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Catalin Marinas, Ard Biesheuvel, linux-arm-kernel, linux-kernel,
	Kees Cook, Peter Robinson

Hi Laura,

On Tue, May 22, 2018 at 04:50:49PM -0700, Laura Abbott wrote:
> Commit 15122ee2c515 ("arm64: Enforce BBM for huge IO/VMAP mappings")
> disallowed block mappings for ioremap since that code does not honor
> break-before-make. However, the same APIs are also used for permission
> updates, and the extra checks prevent those updates from happening even
> though they should be permitted. This results in read-only permissions
> not being fully applied. Visibly, this can occasionally be seen as a
> failure of the built-in rodata test when the test data ends up in a
> section mapping, or as an odd RW gap in the page table dump. Fix this by
> keeping the check in the top-level p*d_set_huge APIs but calling
> separate internal functions for the actual update.
> 
> Reported-by: Peter Robinson <pbrobinson@gmail.com>
> Fixes: 15122ee2c515 ("arm64: Enforce BBM for huge IO/VMAP mappings")
> Signed-off-by: Laura Abbott <labbott@redhat.com>
> ---
>  arch/arm64/mm/mmu.c | 28 ++++++++++++++++++++--------
>  1 file changed, 20 insertions(+), 8 deletions(-)

Thanks for sending the fix. One thing below...

> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 2dbb2c9f1ec1..57517ad86910 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -66,6 +66,9 @@ static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
>  static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
>  static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
>  
> +static void __pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot);
> +static void __pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot);
> +
>  pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
>  			      unsigned long size, pgprot_t vma_prot)
>  {
> @@ -200,7 +203,7 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
>  		/* try section mapping first */
>  		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
>  		    (flags & NO_BLOCK_MAPPINGS) == 0) {
> -			pmd_set_huge(pmdp, phys, prot);
> +			__pmd_set_huge(pmdp, phys, prot);

Given that there is ongoing work to fix the core ioremap code, it would
be nice to avoid adding '__' versions if we can help it. Would it work
if we replaced the pXd_present check with a call to pgattr_change_is_safe
instead?
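
For illustration, a rough sketch of what that alternative might look like
for the pud case (assuming the existing static pgattr_change_is_safe()
helper in mmu.c, which returns true when a change to a live mapping does
not require break-before-make; the pmd variant would be analogous):

	int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
	{
		pgprot_t sect_prot = __pgprot(PUD_TYPE_SECT |
						pgprot_val(mk_sect_prot(prot)));
		pud_t new_pud = pfn_pud(__phys_to_pfn(phys), sect_prot);

		/*
		 * Only refuse transitions that would actually need
		 * break-before-make; pure permission changes on a live
		 * entry (and writes to an empty entry) are allowed.
		 */
		if (!pgattr_change_is_safe(READ_ONCE(pud_val(*pudp)),
					   pud_val(new_pud)))
			return 0;

		BUG_ON(phys & ~PUD_MASK);
		set_pud(pudp, new_pud);
		return 1;
	}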

Will
