linux-arm-kernel.lists.infradead.org archive mirror
* [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO
       [not found] <cover.1547153058.git.khalid.aziz@oracle.com>
@ 2019-01-10 21:09 ` Khalid Aziz
  2019-01-23 14:20   ` Konrad Rzeszutek Wilk
  2019-01-23 14:24   ` Konrad Rzeszutek Wilk
  2019-01-10 21:09 ` [RFC PATCH v7 07/16] arm64/mm, xpfo: temporarily map dcache regions Khalid Aziz
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 15+ messages in thread
From: Khalid Aziz @ 2019-01-10 21:09 UTC (permalink / raw)
  To: juergh, tycho, jsteckli, ak, torvalds, liran.alon, keescook, konrad.wilk
  Cc: Tycho Andersen, kernel-hardening, linux-mm, Khalid Aziz,
	deepa.srinivasan, steven.sistare, joao.m.martins,
	boris.ostrovsky, hch, kanth.ghatraju, pradeep.vincent, jcm, tglx,
	chris.hyser, linux-arm-kernel, jmattson, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	kirill.shutemov

From: Juerg Haefliger <juerg.haefliger@canonical.com>

Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
provide a hook for updating a single kernel page table entry (which is
required by the generic XPFO code).
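
For orientation, the generic XPFO code added earlier in this series is
expected to drive these two hooks roughly as follows when it takes a page
away from the kernel or hands it back. This is a sketch of the assumed
call pattern only, not part of this patch:

/*
 * Sketch (assumption): how the generic XPFO code uses the arch hooks.
 */
static void xpfo_sketch_unmap_page(struct page *page)
{
	/* Clear the linear-map PTE so the kernel can no longer reach it. */
	set_kpte(page_address(page), page, __pgprot(0));
	/* Shoot down the stale translation on all CPUs. */
	xpfo_flush_kernel_tlb(page, 0);
}

static void xpfo_sketch_map_page(struct page *page)
{
	/* Restore a normal kernel mapping while the page is in use. */
	set_kpte(page_address(page), page, PAGE_KERNEL);
}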

v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()

CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
Signed-off-by: Tycho Andersen <tycho@docker.com>
Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
---
 arch/arm64/Kconfig     |  1 +
 arch/arm64/mm/Makefile |  2 ++
 arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 61 insertions(+)
 create mode 100644 arch/arm64/mm/xpfo.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index ea2ab0330e3a..f0a9c0007d23 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -171,6 +171,7 @@ config ARM64
 	select SWIOTLB
 	select SYSCTL_EXCEPTION_TRACE
 	select THREAD_INFO_IN_TASK
+	select ARCH_SUPPORTS_XPFO
 	help
 	  ARM 64-bit (AArch64) Linux support.
 
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 849c1df3d214..cca3808d9776 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o	+= n
 
 obj-$(CONFIG_KASAN)		+= kasan_init.o
 KASAN_SANITIZE_kasan_init.o	:= n
+
+obj-$(CONFIG_XPFO)		+= xpfo.o
diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
new file mode 100644
index 000000000000..678e2be848eb
--- /dev/null
+++ b/arch/arm64/mm/xpfo.c
@@ -0,0 +1,58 @@
+/*
+ * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
+ * Copyright (C) 2016 Brown University. All rights reserved.
+ *
+ * Authors:
+ *   Juerg Haefliger <juerg.haefliger@hpe.com>
+ *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+#include <linux/mm.h>
+#include <linux/module.h>
+
+#include <asm/tlbflush.h>
+
+/*
+ * Lookup the page table entry for a virtual address and return a pointer to
+ * the entry. Based on x86 tree.
+ */
+static pte_t *lookup_address(unsigned long addr)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = pgd_offset_k(addr);
+	if (pgd_none(*pgd))
+		return NULL;
+
+	pud = pud_offset(pgd, addr);
+	if (pud_none(*pud))
+		return NULL;
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return NULL;
+
+	return pte_offset_kernel(pmd, addr);
+}
+
+/* Update a single kernel page table entry */
+inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
+{
+	pte_t *pte = lookup_address((unsigned long)kaddr);
+
+	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
+}
+
+inline void xpfo_flush_kernel_tlb(struct page *page, int order)
+{
+	unsigned long kaddr = (unsigned long)page_address(page);
+	unsigned long size = PAGE_SIZE;
+
+	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
+}
-- 
2.17.1



* [RFC PATCH v7 07/16] arm64/mm, xpfo: temporarily map dcache regions
       [not found] <cover.1547153058.git.khalid.aziz@oracle.com>
  2019-01-10 21:09 ` [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO Khalid Aziz
@ 2019-01-10 21:09 ` Khalid Aziz
  2019-01-11 14:54   ` Tycho Andersen
  2019-01-23 14:56   ` Konrad Rzeszutek Wilk
  2019-01-10 21:09 ` [RFC PATCH v7 08/16] arm64/mm: disable section/contiguous mappings if XPFO is enabled Khalid Aziz
  2019-01-10 21:09 ` [RFC PATCH v7 09/16] mm: add a user_virt_to_phys symbol Khalid Aziz
  3 siblings, 2 replies; 15+ messages in thread
From: Khalid Aziz @ 2019-01-10 21:09 UTC (permalink / raw)
  To: juergh, tycho, jsteckli, ak, torvalds, liran.alon, keescook, konrad.wilk
  Cc: Tycho Andersen, kernel-hardening, linux-mm, Khalid Aziz,
	deepa.srinivasan, steven.sistare, joao.m.martins,
	boris.ostrovsky, hch, kanth.ghatraju, pradeep.vincent, jcm, tglx,
	chris.hyser, linux-arm-kernel, jmattson, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	kirill.shutemov

From: Juerg Haefliger <juerg.haefliger@canonical.com>

If the page is unmapped by XPFO, a data cache flush results in a fatal
page fault, so let's temporarily map the region, flush the cache, and then
unmap it.
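
For reference, xpfo_temp_map()/xpfo_temp_unmap() are helpers from the
generic XPFO patch earlier in this series. Their assumed behaviour is
roughly the following (a sketch, not this patch):

void xpfo_temp_map(const void *addr, size_t size, void **mapping,
		   size_t mapping_len)
{
	struct page *page = virt_to_page(addr);
	int i, num_pages = mapping_len / sizeof(mapping[0]);

	memset(mapping, 0, mapping_len);

	/* kmap_atomic() goes through the xpfo_kmap() hook, which
	 * transparently restores the kernel mapping of an XPFO-unmapped
	 * page until the matching unmap. */
	for (i = 0; i < num_pages; i++)
		mapping[i] = kmap_atomic(page + i);
}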

v6: actually flush in the face of xpfo, and temporarily map the underlying
    memory so it can be flushed correctly

CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
Signed-off-by: Tycho Andersen <tycho@docker.com>
Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
---
 arch/arm64/mm/flush.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 30695a868107..f12f26b60319 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -20,6 +20,7 @@
 #include <linux/export.h>
 #include <linux/mm.h>
 #include <linux/pagemap.h>
+#include <linux/xpfo.h>
 
 #include <asm/cacheflush.h>
 #include <asm/cache.h>
@@ -28,9 +29,15 @@
 void sync_icache_aliases(void *kaddr, unsigned long len)
 {
 	unsigned long addr = (unsigned long)kaddr;
+	unsigned long num_pages = XPFO_NUM_PAGES(addr, len);
+	void *mapping[num_pages];
 
 	if (icache_is_aliasing()) {
+		xpfo_temp_map(kaddr, len, mapping,
+			      sizeof(mapping[0]) * num_pages);
 		__clean_dcache_area_pou(kaddr, len);
+		xpfo_temp_unmap(kaddr, len, mapping,
+			        sizeof(mapping[0]) * num_pages);
 		__flush_icache_all();
 	} else {
 		flush_icache_range(addr, addr + len);
-- 
2.17.1



* [RFC PATCH v7 08/16] arm64/mm: disable section/contiguous mappings if XPFO is enabled
       [not found] <cover.1547153058.git.khalid.aziz@oracle.com>
  2019-01-10 21:09 ` [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO Khalid Aziz
  2019-01-10 21:09 ` [RFC PATCH v7 07/16] arm64/mm, xpfo: temporarily map dcache regions Khalid Aziz
@ 2019-01-10 21:09 ` Khalid Aziz
  2019-01-10 21:09 ` [RFC PATCH v7 09/16] mm: add a user_virt_to_phys symbol Khalid Aziz
  3 siblings, 0 replies; 15+ messages in thread
From: Khalid Aziz @ 2019-01-10 21:09 UTC (permalink / raw)
  To: juergh, tycho, jsteckli, ak, torvalds, liran.alon, keescook, konrad.wilk
  Cc: Tycho Andersen, kernel-hardening, linux-mm, Khalid Aziz,
	deepa.srinivasan, steven.sistare, joao.m.martins,
	boris.ostrovsky, hch, kanth.ghatraju, pradeep.vincent, jcm, tglx,
	chris.hyser, linux-arm-kernel, jmattson, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, dwmw, kirill.shutemov

From: Tycho Andersen <tycho@docker.com>

XPFO doesn't support section/contiguous mappings yet, so let's disable them
if XPFO is turned on.

Thanks to Laura Abbott for the simplification from v5, and Mark Rutland for
pointing out we need NO_CONT_MAPPINGS too.

CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Tycho Andersen <tycho@docker.com>
Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
---
 arch/arm64/mm/mmu.c  | 2 +-
 include/linux/xpfo.h | 4 ++++
 mm/xpfo.c            | 6 ++++++
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d1d6601b385d..f4dd27073006 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -451,7 +451,7 @@ static void __init map_mem(pgd_t *pgdp)
 	struct memblock_region *reg;
 	int flags = 0;
 
-	if (debug_pagealloc_enabled())
+	if (debug_pagealloc_enabled() || xpfo_enabled())
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*
diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 2682a00ebbcb..0c26836a24e1 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -46,6 +46,8 @@ void xpfo_temp_map(const void *addr, size_t size, void **mapping,
 void xpfo_temp_unmap(const void *addr, size_t size, void **mapping,
 		     size_t mapping_len);
 
+bool xpfo_enabled(void);
+
 #else /* !CONFIG_XPFO */
 
 static inline void xpfo_kmap(void *kaddr, struct page *page) { }
@@ -68,6 +70,8 @@ static inline void xpfo_temp_unmap(const void *addr, size_t size,
 }
 
 
+static inline bool xpfo_enabled(void) { return false; }
+
 #endif /* CONFIG_XPFO */
 
 #endif /* _LINUX_XPFO_H */
diff --git a/mm/xpfo.c b/mm/xpfo.c
index f79075bf7d65..25fba05d01bd 100644
--- a/mm/xpfo.c
+++ b/mm/xpfo.c
@@ -70,6 +70,12 @@ struct page_ext_operations page_xpfo_ops = {
 	.init = init_xpfo,
 };
 
+bool __init xpfo_enabled(void)
+{
+	return !xpfo_disabled;
+}
+EXPORT_SYMBOL(xpfo_enabled);
+
 static inline struct xpfo *lookup_xpfo(struct page *page)
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
-- 
2.17.1



* [RFC PATCH v7 09/16] mm: add a user_virt_to_phys symbol
       [not found] <cover.1547153058.git.khalid.aziz@oracle.com>
                   ` (2 preceding siblings ...)
  2019-01-10 21:09 ` [RFC PATCH v7 08/16] arm64/mm: disable section/contiguous mappings if XPFO is enabled Khalid Aziz
@ 2019-01-10 21:09 ` Khalid Aziz
  2019-01-23 15:03   ` Konrad Rzeszutek Wilk
  3 siblings, 1 reply; 15+ messages in thread
From: Khalid Aziz @ 2019-01-10 21:09 UTC (permalink / raw)
  To: juergh, tycho, jsteckli, ak, torvalds, liran.alon, keescook, konrad.wilk
  Cc: Tycho Andersen, kernel-hardening, linux-mm, Khalid Aziz,
	deepa.srinivasan, steven.sistare, joao.m.martins,
	boris.ostrovsky, x86, hch, kanth.ghatraju, pradeep.vincent, jcm,
	tglx, chris.hyser, linux-arm-kernel, jmattson, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, dwmw, kirill.shutemov

From: Tycho Andersen <tycho@docker.com>

We need something like this for testing XPFO. Since it's architecture
specific, putting it in the test code is slightly awkward, so let's make it
an arch-specific symbol and export it for use in LKDTM.
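
For illustration, the LKDTM side (a separate patch in this series) is
expected to use the symbol along these lines. This is a hypothetical
sketch with made-up function and variable names:

/* Sketch (hypothetical): translate a user address to its physical page,
 * then try to read it through the kernel's linear map. With XPFO
 * enabled, that mapping should be gone and the access should fault. */
static void sketch_lkdtm_xpfo_read(unsigned long user_addr)
{
	phys_addr_t phys = user_virt_to_phys(user_addr);
	unsigned int *linear;

	if (!phys) {
		pr_err("user_virt_to_phys() failed\n");
		return;
	}

	linear = phys_to_virt(phys);
	pr_info("attempting read via linear map: %x\n", *linear);
}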

v6: * add a definition of user_virt_to_phys in the !CONFIG_XPFO case

CC: linux-arm-kernel@lists.infradead.org
CC: x86@kernel.org
Signed-off-by: Tycho Andersen <tycho@docker.com>
Tested-by: Marco Benatto <marco.antonio.780@gmail.com>
Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
---
 arch/x86/mm/xpfo.c   | 57 ++++++++++++++++++++++++++++++++++++++++++++
 include/linux/xpfo.h |  8 +++++++
 2 files changed, 65 insertions(+)

diff --git a/arch/x86/mm/xpfo.c b/arch/x86/mm/xpfo.c
index d1f04ea533cd..bcdb2f2089d2 100644
--- a/arch/x86/mm/xpfo.c
+++ b/arch/x86/mm/xpfo.c
@@ -112,3 +112,60 @@ inline void xpfo_flush_kernel_tlb(struct page *page, int order)
 
 	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
 }
+
+/* Convert a user space virtual address to a physical address.
+ * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
+ * arch/x86/mm/pageattr.c
+ */
+phys_addr_t user_virt_to_phys(unsigned long addr)
+{
+	phys_addr_t phys_addr;
+	unsigned long offset;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset(current->mm, addr);
+	if (pgd_none(*pgd))
+		return 0;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return 0;
+
+	if (p4d_large(*p4d) || !p4d_present(*p4d)) {
+		phys_addr = (unsigned long)p4d_pfn(*p4d) << PAGE_SHIFT;
+		offset = addr & ~P4D_MASK;
+		goto out;
+	}
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return 0;
+
+	if (pud_large(*pud) || !pud_present(*pud)) {
+		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;
+		offset = addr & ~PUD_MASK;
+		goto out;
+	}
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return 0;
+
+	if (pmd_large(*pmd) || !pmd_present(*pmd)) {
+		phys_addr = (unsigned long)pmd_pfn(*pmd) << PAGE_SHIFT;
+		offset = addr & ~PMD_MASK;
+		goto out;
+	}
+
+	pte = pte_offset_kernel(pmd, addr);
+	phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
+	offset = addr & ~PAGE_MASK;
+
+out:
+	return (phys_addr_t)(phys_addr | offset);
+}
+EXPORT_SYMBOL(user_virt_to_phys);
diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 0c26836a24e1..d4b38ab8a633 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -23,6 +23,10 @@ struct page;
 
 #ifdef CONFIG_XPFO
 
+#include <linux/dma-mapping.h>
+
+#include <linux/types.h>
+
 extern struct page_ext_operations page_xpfo_ops;
 
 void set_kpte(void *kaddr, struct page *page, pgprot_t prot);
@@ -48,6 +52,8 @@ void xpfo_temp_unmap(const void *addr, size_t size, void **mapping,
 
 bool xpfo_enabled(void);
 
+phys_addr_t user_virt_to_phys(unsigned long addr);
+
 #else /* !CONFIG_XPFO */
 
 static inline void xpfo_kmap(void *kaddr, struct page *page) { }
@@ -72,6 +78,8 @@ static inline void xpfo_temp_unmap(const void *addr, size_t size,
 
 static inline bool xpfo_enabled(void) { return false; }
 
+static inline phys_addr_t user_virt_to_phys(unsigned long addr) { return 0; }
+
 #endif /* CONFIG_XPFO */
 
 #endif /* _LINUX_XPFO_H */
-- 
2.17.1



* Re: [RFC PATCH v7 07/16] arm64/mm, xpfo: temporarily map dcache regions
  2019-01-10 21:09 ` [RFC PATCH v7 07/16] arm64/mm, xpfo: temporarily map dcache regions Khalid Aziz
@ 2019-01-11 14:54   ` Tycho Andersen
  2019-01-11 18:28     ` Khalid Aziz
  2019-01-23 14:56   ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 15+ messages in thread
From: Tycho Andersen @ 2019-01-11 14:54 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: kernel-hardening, linux-mm, deepa.srinivasan, steven.sistare,
	joao.m.martins, boris.ostrovsky, ak, hch, kanth.ghatraju,
	jsteckli, pradeep.vincent, konrad.wilk, jcm, liran.alon, tglx,
	chris.hyser, linux-arm-kernel, jmattson, juergh, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	keescook, torvalds, kirill.shutemov

On Thu, Jan 10, 2019 at 02:09:39PM -0700, Khalid Aziz wrote:
> From: Juerg Haefliger <juerg.haefliger@canonical.com>
> 
> If the page is unmapped by XPFO, a data cache flush results in a fatal
> page fault, so let's temporarily map the region, flush the cache, and then
> unmap it.
> 
> v6: actually flush in the face of xpfo, and temporarily map the underlying
>     memory so it can be flushed correctly
> 
> CC: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
> Signed-off-by: Tycho Andersen <tycho@docker.com>
> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
> ---
>  arch/arm64/mm/flush.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index 30695a868107..f12f26b60319 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -20,6 +20,7 @@
>  #include <linux/export.h>
>  #include <linux/mm.h>
>  #include <linux/pagemap.h>
> +#include <linux/xpfo.h>
>  
>  #include <asm/cacheflush.h>
>  #include <asm/cache.h>
> @@ -28,9 +29,15 @@
>  void sync_icache_aliases(void *kaddr, unsigned long len)
>  {
>  	unsigned long addr = (unsigned long)kaddr;
> +	unsigned long num_pages = XPFO_NUM_PAGES(addr, len);
> +	void *mapping[num_pages];

Does this still compile with -Wvla? It was a bad hack on my part, and
we should probably just drop it and come up with something else :)

Tycho

>  	if (icache_is_aliasing()) {
> +		xpfo_temp_map(kaddr, len, mapping,
> +			      sizeof(mapping[0]) * num_pages);
>  		__clean_dcache_area_pou(kaddr, len);
> +		xpfo_temp_unmap(kaddr, len, mapping,
> +			        sizeof(mapping[0]) * num_pages);
>  		__flush_icache_all();
>  	} else {
>  		flush_icache_range(addr, addr + len);
> -- 
> 2.17.1
> 


* Re: [RFC PATCH v7 07/16] arm64/mm, xpfo: temporarily map dcache regions
  2019-01-11 14:54   ` Tycho Andersen
@ 2019-01-11 18:28     ` Khalid Aziz
  2019-01-11 19:50       ` Tycho Andersen
  0 siblings, 1 reply; 15+ messages in thread
From: Khalid Aziz @ 2019-01-11 18:28 UTC (permalink / raw)
  To: Tycho Andersen
  Cc: kernel-hardening, linux-mm, deepa.srinivasan, steven.sistare,
	joao.m.martins, boris.ostrovsky, ak, hch, kanth.ghatraju,
	jsteckli, pradeep.vincent, konrad.wilk, jcm, liran.alon, tglx,
	chris.hyser, linux-arm-kernel, jmattson, juergh, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	keescook, torvalds, kirill.shutemov

On 1/11/19 7:54 AM, Tycho Andersen wrote:
> On Thu, Jan 10, 2019 at 02:09:39PM -0700, Khalid Aziz wrote:
>> From: Juerg Haefliger <juerg.haefliger@canonical.com>
>>
>> If the page is unmapped by XPFO, a data cache flush results in a fatal
>> page fault, so let's temporarily map the region, flush the cache, and then
>> unmap it.
>>
>> v6: actually flush in the face of xpfo, and temporarily map the underlying
>>     memory so it can be flushed correctly
>>
>> CC: linux-arm-kernel@lists.infradead.org
>> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
>> Signed-off-by: Tycho Andersen <tycho@docker.com>
>> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
>> ---
>>  arch/arm64/mm/flush.c | 7 +++++++
>>  1 file changed, 7 insertions(+)
>>
>> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
>> index 30695a868107..f12f26b60319 100644
>> --- a/arch/arm64/mm/flush.c
>> +++ b/arch/arm64/mm/flush.c
>> @@ -20,6 +20,7 @@
>>  #include <linux/export.h>
>>  #include <linux/mm.h>
>>  #include <linux/pagemap.h>
>> +#include <linux/xpfo.h>
>>  
>>  #include <asm/cacheflush.h>
>>  #include <asm/cache.h>
>> @@ -28,9 +29,15 @@
>>  void sync_icache_aliases(void *kaddr, unsigned long len)
>>  {
>>  	unsigned long addr = (unsigned long)kaddr;
>> +	unsigned long num_pages = XPFO_NUM_PAGES(addr, len);
>> +	void *mapping[num_pages];
> 
> Does this still compile with -Wvla? It was a bad hack on my part, and
> we should probably just drop it and come up with something else :)

I will make a note of it. I hope someone with better knowledge of arm64
than me can come up with a better solution ;)

--
Khalid

> 
> Tycho
> 
>>  	if (icache_is_aliasing()) {
>> +		xpfo_temp_map(kaddr, len, mapping,
>> +			      sizeof(mapping[0]) * num_pages);
>>  		__clean_dcache_area_pou(kaddr, len);
>> +		xpfo_temp_unmap(kaddr, len, mapping,
>> +			        sizeof(mapping[0]) * num_pages);
>>  		__flush_icache_all();
>>  	} else {
>>  		flush_icache_range(addr, addr + len);
>> -- 
>> 2.17.1
>>



* Re: [RFC PATCH v7 07/16] arm64/mm, xpfo: temporarily map dcache regions
  2019-01-11 18:28     ` Khalid Aziz
@ 2019-01-11 19:50       ` Tycho Andersen
  0 siblings, 0 replies; 15+ messages in thread
From: Tycho Andersen @ 2019-01-11 19:50 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: kernel-hardening, linux-mm, deepa.srinivasan, steven.sistare,
	joao.m.martins, boris.ostrovsky, ak, hch, kanth.ghatraju,
	jsteckli, pradeep.vincent, konrad.wilk, jcm, liran.alon, tglx,
	chris.hyser, linux-arm-kernel, jmattson, juergh, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	keescook, torvalds, kirill.shutemov

On Fri, Jan 11, 2019 at 11:28:19AM -0700, Khalid Aziz wrote:
> On 1/11/19 7:54 AM, Tycho Andersen wrote:
> > On Thu, Jan 10, 2019 at 02:09:39PM -0700, Khalid Aziz wrote:
> >> From: Juerg Haefliger <juerg.haefliger@canonical.com>
> >>
> >> If the page is unmapped by XPFO, a data cache flush results in a fatal
> >> page fault, so let's temporarily map the region, flush the cache, and then
> >> unmap it.
> >>
> >> v6: actually flush in the face of xpfo, and temporarily map the underlying
> >>     memory so it can be flushed correctly
> >>
> >> CC: linux-arm-kernel@lists.infradead.org
> >> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
> >> Signed-off-by: Tycho Andersen <tycho@docker.com>
> >> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
> >> ---
> >>  arch/arm64/mm/flush.c | 7 +++++++
> >>  1 file changed, 7 insertions(+)
> >>
> >> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> >> index 30695a868107..f12f26b60319 100644
> >> --- a/arch/arm64/mm/flush.c
> >> +++ b/arch/arm64/mm/flush.c
> >> @@ -20,6 +20,7 @@
> >>  #include <linux/export.h>
> >>  #include <linux/mm.h>
> >>  #include <linux/pagemap.h>
> >> +#include <linux/xpfo.h>
> >>  
> >>  #include <asm/cacheflush.h>
> >>  #include <asm/cache.h>
> >> @@ -28,9 +29,15 @@
> >>  void sync_icache_aliases(void *kaddr, unsigned long len)
> >>  {
> >>  	unsigned long addr = (unsigned long)kaddr;
> >> +	unsigned long num_pages = XPFO_NUM_PAGES(addr, len);
> >> +	void *mapping[num_pages];
> > 
> > Does this still compile with -Wvla? It was a bad hack on my part, and
> > we should probably just drop it and come up with something else :)
> 
> I will make a note of it. I hope someone with better knowledge of arm64
> than me can come up with a better solution ;)

It's not just arm64; IIRC, every place I used xpfo_temp_map() has a VLA.
I think this is in part because some of these paths don't allow
allocation failures, so we can't do a dynamic allocation. Perhaps we
need to reserve some memory for each call site?
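
One way to avoid the VLA without a dynamic allocation might be to walk
the range in fixed-size chunks, so the mapping array gets a compile-time
bound. Just a sketch, assuming a page-aligned kaddr and an arbitrary
chunk size:

#define XPFO_TEMP_MAP_CHUNK	16	/* arbitrary compile-time bound */

static void sketch_clean_dcache_chunked(void *kaddr, unsigned long len)
{
	void *mapping[XPFO_TEMP_MAP_CHUNK];
	unsigned long off = 0;

	while (off < len) {
		unsigned long chunk = min_t(unsigned long, len - off,
					    XPFO_TEMP_MAP_CHUNK * PAGE_SIZE);

		xpfo_temp_map(kaddr + off, chunk, mapping, sizeof(mapping));
		__clean_dcache_area_pou(kaddr + off, chunk);
		xpfo_temp_unmap(kaddr + off, chunk, mapping, sizeof(mapping));
		off += chunk;
	}
}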

Tycho


* Re: [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO
  2019-01-10 21:09 ` [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO Khalid Aziz
@ 2019-01-23 14:20   ` Konrad Rzeszutek Wilk
  2019-02-12 15:45     ` Khalid Aziz
  2019-01-23 14:24   ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2019-01-23 14:20 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: Tycho Andersen, kernel-hardening, linux-mm, deepa.srinivasan,
	steven.sistare, joao.m.martins, boris.ostrovsky, tycho, ak, hch,
	kanth.ghatraju, jsteckli, pradeep.vincent, jcm, liran.alon, tglx,
	chris.hyser, linux-arm-kernel, jmattson, juergh, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	keescook, torvalds, kirill.shutemov

On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
> From: Juerg Haefliger <juerg.haefliger@canonical.com>
> 
> Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
> provide a hook for updating a single kernel page table entry (which is
> required by the generic XPFO code).
> 
> v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
> 
> CC: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
> Signed-off-by: Tycho Andersen <tycho@docker.com>
> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
> ---
>  arch/arm64/Kconfig     |  1 +
>  arch/arm64/mm/Makefile |  2 ++
>  arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 61 insertions(+)
>  create mode 100644 arch/arm64/mm/xpfo.c
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index ea2ab0330e3a..f0a9c0007d23 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -171,6 +171,7 @@ config ARM64
>  	select SWIOTLB
>  	select SYSCTL_EXCEPTION_TRACE
>  	select THREAD_INFO_IN_TASK
> +	select ARCH_SUPPORTS_XPFO
>  	help
>  	  ARM 64-bit (AArch64) Linux support.
>  
> diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
> index 849c1df3d214..cca3808d9776 100644
> --- a/arch/arm64/mm/Makefile
> +++ b/arch/arm64/mm/Makefile
> @@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o	+= n
>  
>  obj-$(CONFIG_KASAN)		+= kasan_init.o
>  KASAN_SANITIZE_kasan_init.o	:= n
> +
> +obj-$(CONFIG_XPFO)		+= xpfo.o
> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
> new file mode 100644
> index 000000000000..678e2be848eb
> --- /dev/null
> +++ b/arch/arm64/mm/xpfo.c
> @@ -0,0 +1,58 @@
> +/*
> + * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
> + * Copyright (C) 2016 Brown University. All rights reserved.
> + *
> + * Authors:
> + *   Juerg Haefliger <juerg.haefliger@hpe.com>
> + *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/module.h>
> +
> +#include <asm/tlbflush.h>
> +
> +/*
> + * Lookup the page table entry for a virtual address and return a pointer to
> + * the entry. Based on x86 tree.
> + */
> +static pte_t *lookup_address(unsigned long addr)

The x86 version also returns the page table level. Would it make sense to include that here?


* Re: [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO
  2019-01-10 21:09 ` [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO Khalid Aziz
  2019-01-23 14:20   ` Konrad Rzeszutek Wilk
@ 2019-01-23 14:24   ` Konrad Rzeszutek Wilk
  2019-02-12 15:52     ` Khalid Aziz
  1 sibling, 1 reply; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2019-01-23 14:24 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: Tycho Andersen, kernel-hardening, linux-mm, deepa.srinivasan,
	steven.sistare, joao.m.martins, boris.ostrovsky, tycho, ak, hch,
	kanth.ghatraju, jsteckli, pradeep.vincent, jcm, liran.alon, tglx,
	chris.hyser, linux-arm-kernel, jmattson, juergh, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	keescook, torvalds, kirill.shutemov

On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
> From: Juerg Haefliger <juerg.haefliger@canonical.com>
> 
> Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
> provide a hook for updating a single kernel page table entry (which is
> required by the generic XPFO code).
> 
> v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
> 
> CC: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
> Signed-off-by: Tycho Andersen <tycho@docker.com>
> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
> ---
>  arch/arm64/Kconfig     |  1 +
>  arch/arm64/mm/Makefile |  2 ++
>  arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 61 insertions(+)
>  create mode 100644 arch/arm64/mm/xpfo.c
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index ea2ab0330e3a..f0a9c0007d23 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -171,6 +171,7 @@ config ARM64
>  	select SWIOTLB
>  	select SYSCTL_EXCEPTION_TRACE
>  	select THREAD_INFO_IN_TASK
> +	select ARCH_SUPPORTS_XPFO
>  	help
>  	  ARM 64-bit (AArch64) Linux support.
>  
> diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
> index 849c1df3d214..cca3808d9776 100644
> --- a/arch/arm64/mm/Makefile
> +++ b/arch/arm64/mm/Makefile
> @@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o	+= n
>  
>  obj-$(CONFIG_KASAN)		+= kasan_init.o
>  KASAN_SANITIZE_kasan_init.o	:= n
> +
> +obj-$(CONFIG_XPFO)		+= xpfo.o
> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
> new file mode 100644
> index 000000000000..678e2be848eb
> --- /dev/null
> +++ b/arch/arm64/mm/xpfo.c
> @@ -0,0 +1,58 @@
> +/*
> + * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
> + * Copyright (C) 2016 Brown University. All rights reserved.
> + *
> + * Authors:
> + *   Juerg Haefliger <juerg.haefliger@hpe.com>
> + *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/module.h>
> +
> +#include <asm/tlbflush.h>
> +
> +/*
> + * Lookup the page table entry for a virtual address and return a pointer to
> + * the entry. Based on x86 tree.
> + */
> +static pte_t *lookup_address(unsigned long addr)
> +{
> +	pgd_t *pgd;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +
> +	pgd = pgd_offset_k(addr);
> +	if (pgd_none(*pgd))
> +		return NULL;
> +
> +	pud = pud_offset(pgd, addr);
> +	if (pud_none(*pud))
> +		return NULL;
> +
> +	pmd = pmd_offset(pud, addr);
> +	if (pmd_none(*pmd))
> +		return NULL;
> +
> +	return pte_offset_kernel(pmd, addr);
> +}
> +
> +/* Update a single kernel page table entry */
> +inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
> +{
> +	pte_t *pte = lookup_address((unsigned long)kaddr);
> +
> +	set_pte(pte, pfn_pte(page_to_pfn(page), prot));

Though, on the other hand: what if the page is a PMD? Do you really want
to do this?

What if 'pte' is NULL?
> +}
> +
> +inline void xpfo_flush_kernel_tlb(struct page *page, int order)
> +{
> +	unsigned long kaddr = (unsigned long)page_address(page);
> +	unsigned long size = PAGE_SIZE;
> +
> +	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);

Ditto here. You are assuming it is a PTE, but it may be a PMD or such.
Or worse - the lookup_address could be NULL.

> +}
> -- 
> 2.17.1
> 


* Re: [RFC PATCH v7 07/16] arm64/mm, xpfo: temporarily map dcache regions
  2019-01-10 21:09 ` [RFC PATCH v7 07/16] arm64/mm, xpfo: temporarily map dcache regions Khalid Aziz
  2019-01-11 14:54   ` Tycho Andersen
@ 2019-01-23 14:56   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2019-01-23 14:56 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: Tycho Andersen, kernel-hardening, linux-mm, deepa.srinivasan,
	steven.sistare, joao.m.martins, boris.ostrovsky, tycho, ak, hch,
	kanth.ghatraju, jsteckli, pradeep.vincent, jcm, liran.alon, tglx,
	chris.hyser, linux-arm-kernel, jmattson, juergh, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	keescook, torvalds, kirill.shutemov

On Thu, Jan 10, 2019 at 02:09:39PM -0700, Khalid Aziz wrote:
> From: Juerg Haefliger <juerg.haefliger@canonical.com>
> 
> If the page is unmapped by XPFO, a data cache flush results in a fatal
> page fault, so let's temporarily map the region, flush the cache, and then
> unmap it.
> 
> v6: actually flush in the face of xpfo, and temporarily map the underlying
>     memory so it can be flushed correctly
> 
> CC: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
> Signed-off-by: Tycho Andersen <tycho@docker.com>
> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
> ---
>  arch/arm64/mm/flush.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index 30695a868107..f12f26b60319 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -20,6 +20,7 @@
>  #include <linux/export.h>
>  #include <linux/mm.h>
>  #include <linux/pagemap.h>
> +#include <linux/xpfo.h>
>  
>  #include <asm/cacheflush.h>
>  #include <asm/cache.h>
> @@ -28,9 +29,15 @@
>  void sync_icache_aliases(void *kaddr, unsigned long len)
>  {
>  	unsigned long addr = (unsigned long)kaddr;
> +	unsigned long num_pages = XPFO_NUM_PAGES(addr, len);

Is it possible that 'len' is more than 32 pages? Or, say, thousands
of pages? That would blow away your stack.
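
(For scale: each entry is one pointer, so on a 64-bit kernel a 1 MB range
is 256 pages and 2 KB of stack, while a 16 MB range is 32 KB - twice the
default 16 KB arm64 kernel stack.)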

> +	void *mapping[num_pages];
>  
>  	if (icache_is_aliasing()) {
> +		xpfo_temp_map(kaddr, len, mapping,
> +			      sizeof(mapping[0]) * num_pages);
>  		__clean_dcache_area_pou(kaddr, len);
> +		xpfo_temp_unmap(kaddr, len, mapping,
> +			        sizeof(mapping[0]) * num_pages);
>  		__flush_icache_all();
>  	} else {
>  		flush_icache_range(addr, addr + len);
> -- 
> 2.17.1
> 


* Re: [RFC PATCH v7 09/16] mm: add a user_virt_to_phys symbol
  2019-01-10 21:09 ` [RFC PATCH v7 09/16] mm: add a user_virt_to_phys symbol Khalid Aziz
@ 2019-01-23 15:03   ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2019-01-23 15:03 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: Tycho Andersen, kernel-hardening, linux-mm, deepa.srinivasan,
	steven.sistare, joao.m.martins, boris.ostrovsky, tycho, ak, x86,
	hch, kanth.ghatraju, jsteckli, pradeep.vincent, jcm, liran.alon,
	tglx, chris.hyser, linux-arm-kernel, jmattson, juergh,
	andrew.cooper3, linux-kernel, tyhicks, john.haxby, dwmw,
	keescook, torvalds, kirill.shutemov

> +EXPORT_SYMBOL(user_virt_to_phys);

Could it be _GPL? Otherwise looks OK to me.


* Re: [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO
  2019-01-23 14:20   ` Konrad Rzeszutek Wilk
@ 2019-02-12 15:45     ` Khalid Aziz
  0 siblings, 0 replies; 15+ messages in thread
From: Khalid Aziz @ 2019-02-12 15:45 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Tycho Andersen, kernel-hardening, linux-mm, deepa.srinivasan,
	steven.sistare, joao.m.martins, boris.ostrovsky, tycho, ak, hch,
	kanth.ghatraju, jsteckli, pradeep.vincent, jcm, liran.alon, tglx,
	chris.hyser, linux-arm-kernel, jmattson, juergh, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	keescook, torvalds, kirill.shutemov

On 1/23/19 7:20 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
>> From: Juerg Haefliger <juerg.haefliger@canonical.com>
>>
>> Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
>> provide a hook for updating a single kernel page table entry (which is
>> required by the generic XPFO code).
>>
>> v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
>>
>> CC: linux-arm-kernel@lists.infradead.org
>> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
>> Signed-off-by: Tycho Andersen <tycho@docker.com>
>> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
>> ---
>>  arch/arm64/Kconfig     |  1 +
>>  arch/arm64/mm/Makefile |  2 ++
>>  arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 61 insertions(+)
>>  create mode 100644 arch/arm64/mm/xpfo.c
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index ea2ab0330e3a..f0a9c0007d23 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -171,6 +171,7 @@ config ARM64
>>  	select SWIOTLB
>>  	select SYSCTL_EXCEPTION_TRACE
>>  	select THREAD_INFO_IN_TASK
>> +	select ARCH_SUPPORTS_XPFO
>>  	help
>>  	  ARM 64-bit (AArch64) Linux support.
>>  
>> diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
>> index 849c1df3d214..cca3808d9776 100644
>> --- a/arch/arm64/mm/Makefile
>> +++ b/arch/arm64/mm/Makefile
>> @@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o	+= n
>>  
>>  obj-$(CONFIG_KASAN)		+= kasan_init.o
>>  KASAN_SANITIZE_kasan_init.o	:= n
>> +
>> +obj-$(CONFIG_XPFO)		+= xpfo.o
>> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
>> new file mode 100644
>> index 000000000000..678e2be848eb
>> --- /dev/null
>> +++ b/arch/arm64/mm/xpfo.c
>> @@ -0,0 +1,58 @@
>> +/*
>> + * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
>> + * Copyright (C) 2016 Brown University. All rights reserved.
>> + *
>> + * Authors:
>> + *   Juerg Haefliger <juerg.haefliger@hpe.com>
>> + *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms of the GNU General Public License version 2 as published by
>> + * the Free Software Foundation.
>> + */
>> +
>> +#include <linux/mm.h>
>> +#include <linux/module.h>
>> +
>> +#include <asm/tlbflush.h>
>> +
>> +/*
>> + * Lookup the page table entry for a virtual address and return a pointer to
>> + * the entry. Based on x86 tree.
>> + */
>> +static pte_t *lookup_address(unsigned long addr)
> 
> The x86 version also returns the page table level. Would it make sense to include that here?
> 

Possibly. ARM64 does not define page levels (as in the enum for page
levels) at this time, but that can be added easily. Adding a level to
lookup_address() for arm64 would make it uniform with x86, but is there
any other rationale besides that? Do you see a future use for this
information? The only other architecture I could find that defines
lookup_address() is sh, and it uses it only for trapped I/O.
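
For concreteness, the x86-style variant would look something like this on
arm64 (a sketch only; arm64 has no page-level enum today, so the names
below are made up):

enum xpfo_pg_level { XPFO_PG_NONE, XPFO_PG_PTE, XPFO_PG_PMD, XPFO_PG_PUD };

static pte_t *lookup_address_level(unsigned long addr,
				   enum xpfo_pg_level *level)
{
	pgd_t *pgd = pgd_offset_k(addr);
	pud_t *pud;
	pmd_t *pmd;

	*level = XPFO_PG_NONE;
	if (pgd_none(*pgd))
		return NULL;

	pud = pud_offset(pgd, addr);
	if (pud_none(*pud))
		return NULL;
	if (pud_sect(*pud)) {
		/* 1G section mapping: return the entry itself. */
		*level = XPFO_PG_PUD;
		return (pte_t *)pud;
	}

	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd))
		return NULL;
	if (pmd_sect(*pmd)) {
		/* 2M section mapping. */
		*level = XPFO_PG_PMD;
		return (pte_t *)pmd;
	}

	*level = XPFO_PG_PTE;
	return pte_offset_kernel(pmd, addr);
}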

Thanks,
Khalid


* Re: [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO
  2019-01-23 14:24   ` Konrad Rzeszutek Wilk
@ 2019-02-12 15:52     ` Khalid Aziz
  2019-02-12 20:01       ` Laura Abbott
  0 siblings, 1 reply; 15+ messages in thread
From: Khalid Aziz @ 2019-02-12 15:52 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Tycho Andersen, kernel-hardening, linux-mm, deepa.srinivasan,
	steven.sistare, joao.m.martins, boris.ostrovsky, tycho, ak, hch,
	kanth.ghatraju, jsteckli, pradeep.vincent, jcm, liran.alon, tglx,
	chris.hyser, linux-arm-kernel, jmattson, juergh, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	keescook, torvalds, kirill.shutemov

On 1/23/19 7:24 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
>> From: Juerg Haefliger <juerg.haefliger@canonical.com>
>>
>> Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
>> provide a hook for updating a single kernel page table entry (which is
>> required by the generic XPFO code).
>>
>> v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
>>
>> CC: linux-arm-kernel@lists.infradead.org
>> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
>> Signed-off-by: Tycho Andersen <tycho@docker.com>
>> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
>> ---
>>  arch/arm64/Kconfig     |  1 +
>>  arch/arm64/mm/Makefile |  2 ++
>>  arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 61 insertions(+)
>>  create mode 100644 arch/arm64/mm/xpfo.c
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index ea2ab0330e3a..f0a9c0007d23 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -171,6 +171,7 @@ config ARM64
>>  	select SWIOTLB
>>  	select SYSCTL_EXCEPTION_TRACE
>>  	select THREAD_INFO_IN_TASK
>> +	select ARCH_SUPPORTS_XPFO
>>  	help
>>  	  ARM 64-bit (AArch64) Linux support.
>>  
>> diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
>> index 849c1df3d214..cca3808d9776 100644
>> --- a/arch/arm64/mm/Makefile
>> +++ b/arch/arm64/mm/Makefile
>> @@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o	+= n
>>  
>>  obj-$(CONFIG_KASAN)		+= kasan_init.o
>>  KASAN_SANITIZE_kasan_init.o	:= n
>> +
>> +obj-$(CONFIG_XPFO)		+= xpfo.o
>> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
>> new file mode 100644
>> index 000000000000..678e2be848eb
>> --- /dev/null
>> +++ b/arch/arm64/mm/xpfo.c
>> @@ -0,0 +1,58 @@
>> +/*
>> + * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
>> + * Copyright (C) 2016 Brown University. All rights reserved.
>> + *
>> + * Authors:
>> + *   Juerg Haefliger <juerg.haefliger@hpe.com>
>> + *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms of the GNU General Public License version 2 as published by
>> + * the Free Software Foundation.
>> + */
>> +
>> +#include <linux/mm.h>
>> +#include <linux/module.h>
>> +
>> +#include <asm/tlbflush.h>
>> +
>> +/*
>> + * Lookup the page table entry for a virtual address and return a pointer to
>> + * the entry. Based on x86 tree.
>> + */
>> +static pte_t *lookup_address(unsigned long addr)
>> +{
>> +	pgd_t *pgd;
>> +	pud_t *pud;
>> +	pmd_t *pmd;
>> +
>> +	pgd = pgd_offset_k(addr);
>> +	if (pgd_none(*pgd))
>> +		return NULL;
>> +
>> +	pud = pud_offset(pgd, addr);
>> +	if (pud_none(*pud))
>> +		return NULL;
>> +
>> +	pmd = pmd_offset(pud, addr);
>> +	if (pmd_none(*pmd))
>> +		return NULL;
>> +
>> +	return pte_offset_kernel(pmd, addr);
>> +}
>> +
>> +/* Update a single kernel page table entry */
>> +inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
>> +{
>> +	pte_t *pte = lookup_address((unsigned long)kaddr);
>> +
>> +	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
> 
> Though, on the other hand: what if the page is a PMD? Do you really want
> to do this?
> 
> What if 'pte' is NULL?
>> +}
>> +
>> +inline void xpfo_flush_kernel_tlb(struct page *page, int order)
>> +{
>> +	unsigned long kaddr = (unsigned long)page_address(page);
>> +	unsigned long size = PAGE_SIZE;
>> +
>> +	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
> 
> Ditto here. You are assuming it is a PTE, but it may be a PMD or such.
> Or worse - the lookup_address could be NULL.
> 
>> +}
>> -- 
>> 2.17.1
>>

Hi Konrad,

This makes sense. The x86 version of set_kpte() checks pte for NULL and
also checks whether the page is a PMD. Now what you said about adding a
level to lookup_address() for arm64 makes more sense.

Can someone with knowledge of arm64 mmu make recommendations here?

Thanks,
Khalid


* Re: [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO
  2019-02-12 15:52     ` Khalid Aziz
@ 2019-02-12 20:01       ` Laura Abbott
  2019-02-12 20:34         ` Khalid Aziz
  0 siblings, 1 reply; 15+ messages in thread
From: Laura Abbott @ 2019-02-12 20:01 UTC (permalink / raw)
  To: Khalid Aziz, Konrad Rzeszutek Wilk
  Cc: Tycho Andersen, kernel-hardening, linux-mm, deepa.srinivasan,
	steven.sistare, joao.m.martins, boris.ostrovsky, tycho, ak, hch,
	kanth.ghatraju, jsteckli, pradeep.vincent, jcm, liran.alon, tglx,
	chris.hyser, linux-arm-kernel, jmattson, juergh, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	keescook, torvalds, kirill.shutemov

On 2/12/19 7:52 AM, Khalid Aziz wrote:
> On 1/23/19 7:24 AM, Konrad Rzeszutek Wilk wrote:
>> On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
>>> From: Juerg Haefliger <juerg.haefliger@canonical.com>
>>>
>>> Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
>>> provide a hook for updating a single kernel page table entry (which is
>>> required by the generic XPFO code).
>>>
>>> v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
>>>
>>> CC: linux-arm-kernel@lists.infradead.org
>>> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
>>> Signed-off-by: Tycho Andersen <tycho@docker.com>
>>> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
>>> ---
>>>   arch/arm64/Kconfig     |  1 +
>>>   arch/arm64/mm/Makefile |  2 ++
>>>   arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++
>>>   3 files changed, 61 insertions(+)
>>>   create mode 100644 arch/arm64/mm/xpfo.c
>>>
>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>> index ea2ab0330e3a..f0a9c0007d23 100644
>>> --- a/arch/arm64/Kconfig
>>> +++ b/arch/arm64/Kconfig
>>> @@ -171,6 +171,7 @@ config ARM64
>>>   	select SWIOTLB
>>>   	select SYSCTL_EXCEPTION_TRACE
>>>   	select THREAD_INFO_IN_TASK
>>> +	select ARCH_SUPPORTS_XPFO
>>>   	help
>>>   	  ARM 64-bit (AArch64) Linux support.
>>>   
>>> diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
>>> index 849c1df3d214..cca3808d9776 100644
>>> --- a/arch/arm64/mm/Makefile
>>> +++ b/arch/arm64/mm/Makefile
>>> @@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o	+= n
>>>   
>>>   obj-$(CONFIG_KASAN)		+= kasan_init.o
>>>   KASAN_SANITIZE_kasan_init.o	:= n
>>> +
>>> +obj-$(CONFIG_XPFO)		+= xpfo.o
>>> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
>>> new file mode 100644
>>> index 000000000000..678e2be848eb
>>> --- /dev/null
>>> +++ b/arch/arm64/mm/xpfo.c
>>> @@ -0,0 +1,58 @@
>>> +/*
>>> + * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
>>> + * Copyright (C) 2016 Brown University. All rights reserved.
>>> + *
>>> + * Authors:
>>> + *   Juerg Haefliger <juerg.haefliger@hpe.com>
>>> + *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
>>> + *
>>> + * This program is free software; you can redistribute it and/or modify it
>>> + * under the terms of the GNU General Public License version 2 as published by
>>> + * the Free Software Foundation.
>>> + */
>>> +
>>> +#include <linux/mm.h>
>>> +#include <linux/module.h>
>>> +
>>> +#include <asm/tlbflush.h>
>>> +
>>> +/*
>>> + * Lookup the page table entry for a virtual address and return a pointer to
>>> + * the entry. Based on x86 tree.
>>> + */
>>> +static pte_t *lookup_address(unsigned long addr)
>>> +{
>>> +	pgd_t *pgd;
>>> +	pud_t *pud;
>>> +	pmd_t *pmd;
>>> +
>>> +	pgd = pgd_offset_k(addr);
>>> +	if (pgd_none(*pgd))
>>> +		return NULL;
>>> +
>>> +	pud = pud_offset(pgd, addr);
>>> +	if (pud_none(*pud))
>>> +		return NULL;
>>> +
>>> +	pmd = pmd_offset(pud, addr);
>>> +	if (pmd_none(*pmd))
>>> +		return NULL;
>>> +
>>> +	return pte_offset_kernel(pmd, addr);
>>> +}
>>> +
>>> +/* Update a single kernel page table entry */
>>> +inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
>>> +{
>>> +	pte_t *pte = lookup_address((unsigned long)kaddr);
>>> +
>>> +	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
>>
>> Though, on the other hand: what if the page is a PMD? Do you really want
>> to do this?
>>
>> What if 'pte' is NULL?
>>> +}
>>> +
>>> +inline void xpfo_flush_kernel_tlb(struct page *page, int order)
>>> +{
>>> +	unsigned long kaddr = (unsigned long)page_address(page);
>>> +	unsigned long size = PAGE_SIZE;
>>> +
>>> +	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
>>
>> Ditto here. You are assuming it is a PTE, but it may be a PMD or such.
>> Or worse - the lookup_address could be NULL.
>>
>>> +}
>>> -- 
>>> 2.17.1
>>>
> 
> Hi Konrad,
> 
> This makes sense. The x86 version of set_kpte() checks pte for NULL and
> also checks whether the page is a PMD. Now what you said about adding a
> level to lookup_address() for arm64 makes more sense.
> 
> Can someone with knowledge of arm64 mmu make recommendations here?
> 
> Thanks,
> Khalid
> 

arm64 can't split larger pages and requires that everything be
mapped as pages (see [RFC PATCH v7 08/16] arm64/mm: disable
section/contiguous mappings if XPFO is enabled). Any
situation where we would get something other than a pte
would be a bug.

Thanks,
Laura


* Re: [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO
  2019-02-12 20:01       ` Laura Abbott
@ 2019-02-12 20:34         ` Khalid Aziz
  0 siblings, 0 replies; 15+ messages in thread
From: Khalid Aziz @ 2019-02-12 20:34 UTC (permalink / raw)
  To: Laura Abbott, Konrad Rzeszutek Wilk
  Cc: Tycho Andersen, kernel-hardening, linux-mm, deepa.srinivasan,
	steven.sistare, joao.m.martins, boris.ostrovsky, tycho, ak, hch,
	kanth.ghatraju, jsteckli, pradeep.vincent, jcm, liran.alon, tglx,
	chris.hyser, linux-arm-kernel, jmattson, juergh, andrew.cooper3,
	linux-kernel, tyhicks, john.haxby, Juerg Haefliger, dwmw,
	keescook, torvalds, kirill.shutemov

On 2/12/19 1:01 PM, Laura Abbott wrote:
> On 2/12/19 7:52 AM, Khalid Aziz wrote:
>> On 1/23/19 7:24 AM, Konrad Rzeszutek Wilk wrote:
>>> On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
>>>> From: Juerg Haefliger <juerg.haefliger@canonical.com>
>>>>
>>>> Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
>>>> provide a hook for updating a single kernel page table entry (which is
>>>> required by the generic XPFO code).
>>>>
>>>> v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
>>>>
>>>> CC: linux-arm-kernel@lists.infradead.org
>>>> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
>>>> Signed-off-by: Tycho Andersen <tycho@docker.com>
>>>> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
>>>> ---
>>>>   arch/arm64/Kconfig     |  1 +
>>>>   arch/arm64/mm/Makefile |  2 ++
>>>>   arch/arm64/mm/xpfo.c   | 58
>>>> ++++++++++++++++++++++++++++++++++++++++++
>>>>   3 files changed, 61 insertions(+)
>>>>   create mode 100644 arch/arm64/mm/xpfo.c
>>>>
>>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>>> index ea2ab0330e3a..f0a9c0007d23 100644
>>>> --- a/arch/arm64/Kconfig
>>>> +++ b/arch/arm64/Kconfig
>>>> @@ -171,6 +171,7 @@ config ARM64
>>>>       select SWIOTLB
>>>>       select SYSCTL_EXCEPTION_TRACE
>>>>       select THREAD_INFO_IN_TASK
>>>> +    select ARCH_SUPPORTS_XPFO
>>>>       help
>>>>         ARM 64-bit (AArch64) Linux support.
>>>>   diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
>>>> index 849c1df3d214..cca3808d9776 100644
>>>> --- a/arch/arm64/mm/Makefile
>>>> +++ b/arch/arm64/mm/Makefile
>>>> @@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o    += n
>>>>     obj-$(CONFIG_KASAN)        += kasan_init.o
>>>>   KASAN_SANITIZE_kasan_init.o    := n
>>>> +
>>>> +obj-$(CONFIG_XPFO)        += xpfo.o
>>>> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
>>>> new file mode 100644
>>>> index 000000000000..678e2be848eb
>>>> --- /dev/null
>>>> +++ b/arch/arm64/mm/xpfo.c
>>>> @@ -0,0 +1,58 @@
>>>> +/*
>>>> + * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
>>>> + * Copyright (C) 2016 Brown University. All rights reserved.
>>>> + *
>>>> + * Authors:
>>>> + *   Juerg Haefliger <juerg.haefliger@hpe.com>
>>>> + *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or
>>>> modify it
>>>> + * under the terms of the GNU General Public License version 2 as
>>>> published by
>>>> + * the Free Software Foundation.
>>>> + */
>>>> +
>>>> +#include <linux/mm.h>
>>>> +#include <linux/module.h>
>>>> +
>>>> +#include <asm/tlbflush.h>
>>>> +
>>>> +/*
>>>> + * Lookup the page table entry for a virtual address and return a
>>>> pointer to
>>>> + * the entry. Based on x86 tree.
>>>> + */
>>>> +static pte_t *lookup_address(unsigned long addr)
>>>> +{
>>>> +    pgd_t *pgd;
>>>> +    pud_t *pud;
>>>> +    pmd_t *pmd;
>>>> +
>>>> +    pgd = pgd_offset_k(addr);
>>>> +    if (pgd_none(*pgd))
>>>> +        return NULL;
>>>> +
>>>> +    pud = pud_offset(pgd, addr);
>>>> +    if (pud_none(*pud))
>>>> +        return NULL;
>>>> +
>>>> +    pmd = pmd_offset(pud, addr);
>>>> +    if (pmd_none(*pmd))
>>>> +        return NULL;
>>>> +
>>>> +    return pte_offset_kernel(pmd, addr);
>>>> +}
>>>> +
>>>> +/* Update a single kernel page table entry */
>>>> +inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
>>>> +{
>>>> +    pte_t *pte = lookup_address((unsigned long)kaddr);
>>>> +
>>>> +    set_pte(pte, pfn_pte(page_to_pfn(page), prot));
>>>
>>> Though, on the other hand: what if the page is a PMD? Do you really want
>>> to do this?
>>>
>>> What if 'pte' is NULL?
>>>> +}
>>>> +
>>>> +inline void xpfo_flush_kernel_tlb(struct page *page, int order)
>>>> +{
>>>> +    unsigned long kaddr = (unsigned long)page_address(page);
>>>> +    unsigned long size = PAGE_SIZE;
>>>> +
>>>> +    flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
>>>
>>> Ditto here. You are assuming it is a PTE, but it may be a PMD or such.
>>> Or worse - the lookup_address could be NULL.
>>>
>>>> +}
>>>> -- 
>>>> 2.17.1
>>>>
>>
>> Hi Konrad,
>>
>> This makes sense. The x86 version of set_kpte() checks pte for NULL and
>> also checks whether the page is a PMD. Now what you said about adding a
>> level to lookup_address() for arm64 makes more sense.
>>
>> Can someone with knowledge of arm64 mmu make recommendations here?
>>
>> Thanks,
>> Khalid
>>
> 
> arm64 can't split larger pages and requires that everything be
> mapped as pages (see [RFC PATCH v7 08/16] arm64/mm: disable
> section/contiguous mappings if XPFO is enabled). Any
> situation where we would get something other than a pte
> would be a bug.

Thanks, Laura! That helps a lot. I would think checking for NULL pte in
set_kpte() would still make sense since lookup_address() can return
NULL. Something like:

--- arch/arm64/mm/xpfo.c	2019-01-30 13:36:39.857185612 -0700
+++ arch/arm64/mm/xpfo.c.new	2019-02-12 13:26:47.471633031 -0700
@@ -46,6 +46,11 @@
 {
 	pte_t *pte = lookup_address((unsigned long)kaddr);

+	if (unlikely(!pte)) {
+		WARN(1, "xpfo: invalid address %p\n", kaddr);
+		return;
+	}
+
 	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
 }

--
Khalid

