* [PATCH 0/2] Support for set_memory_* outside of module space @ 2015-11-03 21:48 Laura Abbott 2015-11-03 21:48 ` [PATCH 1/2] arm64: Get existing page protections in split_pmd Laura Abbott ` (2 more replies) 0 siblings, 3 replies; 15+ messages in thread From: Laura Abbott @ 2015-11-03 21:48 UTC (permalink / raw) To: Catalin Marinas, Will Deacon Cc: Laura Abbott, linux-arm-kernel, linux-kernel, Kees Cook, Xishi Qiu, Ard Biesheuvel, Mark Rutland Hi, Based on a recent discussion[1] there is interest in having set_memory_* work on kernel memory for security and other use cases. This patch adds the ability for that to happen behind a kernel option. If this is welcome enough, the Kconfig can be dropped. This has been briefly tested but not stress tested. Thanks, Laura [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2015-October/382079.html Laura Abbott (2): arm64: Get existing page protections in split_pmd arm64: Allow changing of attributes outside of modules arch/arm64/Kconfig.debug | 11 +++++++ arch/arm64/mm/mm.h | 3 ++ arch/arm64/mm/mmu.c | 11 ++++--- arch/arm64/mm/pageattr.c | 74 ++++++++++++++++++++++++++++++++++++++++++++---- 4 files changed, 88 insertions(+), 11 deletions(-) -- 2.4.3 ^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 1/2] arm64: Get existing page protections in split_pmd 2015-11-03 21:48 [PATCH 0/2] Support for set_memory_* outside of module space Laura Abbott @ 2015-11-03 21:48 ` Laura Abbott 2015-11-05 7:07 ` Ard Biesheuvel 2015-11-05 10:15 ` Xishi Qiu 2015-11-03 21:48 ` [PATCH 2/2] arm64: Allow changing of attributes outside of modules Laura Abbott 2015-11-03 23:42 ` [PATCH 0/2] Support for set_memory_* outside of module space Kees Cook 2 siblings, 2 replies; 15+ messages in thread From: Laura Abbott @ 2015-11-03 21:48 UTC (permalink / raw) To: Catalin Marinas, Will Deacon Cc: Laura Abbott, linux-arm-kernel, linux-kernel, Kees Cook, Xishi Qiu, Ard Biesheuvel, Mark Rutland Rather than always putting the least restrictived permissions (PAGE_KERNEL_EXEC) when spliting a pmd into pages, use the existing permissions from the pmd for the page. Signed-off-by: Laura Abbott <labbott@fedoraproject.org> --- arch/arm64/mm/mmu.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 9211b85..ff41efa 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -75,14 +75,13 @@ static void __init *early_alloc(unsigned long sz) static void split_pmd(pmd_t *pmd, pte_t *pte) { unsigned long pfn = pmd_pfn(*pmd); + unsigned long addr = pfn << PAGE_SHIFT; + pgprot_t prot = __pgprot(pmd_val(*pmd) ^ addr) | PTE_TYPE_PAGE; + int i = 0; do { - /* - * Need to have the least restrictive permissions available - * permissions will be fixed up later - */ - set_pte(pte, pfn_pte(pfn, PAGE_KERNEL_EXEC)); + set_pte(pte, pfn_pte(pfn, prot)); pfn++; } while (pte++, i++, i < PTRS_PER_PTE); } -- 2.4.3 ^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 1/2] arm64: Get existing page protections in split_pmd 2015-11-03 21:48 ` [PATCH 1/2] arm64: Get existing page protections in split_pmd Laura Abbott @ 2015-11-05 7:07 ` Ard Biesheuvel 2015-11-05 10:15 ` Xishi Qiu 1 sibling, 0 replies; 15+ messages in thread From: Ard Biesheuvel @ 2015-11-05 7:07 UTC (permalink / raw) To: Laura Abbott Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel, Kees Cook, Xishi Qiu, Mark Rutland Hi Laura, On 3 November 2015 at 22:48, Laura Abbott <labbott@fedoraproject.org> wrote: > > Rather than always putting the least restrictived permissions restrictive > (PAGE_KERNEL_EXEC) when spliting a pmd into pages, use > the existing permissions from the pmd for the page. > > Signed-off-by: Laura Abbott <labbott@fedoraproject.org> > --- > arch/arm64/mm/mmu.c | 9 ++++----- > 1 file changed, 4 insertions(+), 5 deletions(-) > > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c > index 9211b85..ff41efa 100644 > --- a/arch/arm64/mm/mmu.c > +++ b/arch/arm64/mm/mmu.c > @@ -75,14 +75,13 @@ static void __init *early_alloc(unsigned long sz) > static void split_pmd(pmd_t *pmd, pte_t *pte) > { > unsigned long pfn = pmd_pfn(*pmd); > + unsigned long addr = pfn << PAGE_SHIFT; > + pgprot_t prot = __pgprot(pmd_val(*pmd) ^ addr) | PTE_TYPE_PAGE; pgprot_t is a struct type when STRICT_MM_TYPECHECKS is in effect, so __pgprot() should cover the entire expression. With the above fixed: Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Thanks, > + > int i = 0; > > do { > - /* > - * Need to have the least restrictive permissions available > - * permissions will be fixed up later > - */ > - set_pte(pte, pfn_pte(pfn, PAGE_KERNEL_EXEC)); > + set_pte(pte, pfn_pte(pfn, prot)); > pfn++; > } while (pte++, i++, i < PTRS_PER_PTE); > } > -- > 2.4.3 > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 1/2] arm64: Get existing page protections in split_pmd 2015-11-03 21:48 ` [PATCH 1/2] arm64: Get existing page protections in split_pmd Laura Abbott 2015-11-05 7:07 ` Ard Biesheuvel @ 2015-11-05 10:15 ` Xishi Qiu 2015-11-06 1:24 ` Laura Abbott 1 sibling, 1 reply; 15+ messages in thread From: Xishi Qiu @ 2015-11-05 10:15 UTC (permalink / raw) To: Laura Abbott Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel, Kees Cook, Ard Biesheuvel, Mark Rutland On 2015/11/4 5:48, Laura Abbott wrote: > > Rather than always putting the least restrictived permissions > (PAGE_KERNEL_EXEC) when spliting a pmd into pages, use > the existing permissions from the pmd for the page. > > Signed-off-by: Laura Abbott <labbott@fedoraproject.org> > --- > arch/arm64/mm/mmu.c | 9 ++++----- > 1 file changed, 4 insertions(+), 5 deletions(-) > > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c > index 9211b85..ff41efa 100644 > --- a/arch/arm64/mm/mmu.c > +++ b/arch/arm64/mm/mmu.c > @@ -75,14 +75,13 @@ static void __init *early_alloc(unsigned long sz) > static void split_pmd(pmd_t *pmd, pte_t *pte) > { > unsigned long pfn = pmd_pfn(*pmd); > + unsigned long addr = pfn << PAGE_SHIFT; > + pgprot_t prot = __pgprot(pmd_val(*pmd) ^ addr) | PTE_TYPE_PAGE; > + Hi Laura, I'm not quite understand, I find split_pud() doesn't set the flag PMD_TYPE_TABLE. If we clear xx_TABLE_BIT, does that means the page is large page? And what is the different from the flag xx_TYPE_SECT? Thanks, Xishi Qiu > int i = 0; > > do { > - /* > - * Need to have the least restrictive permissions available > - * permissions will be fixed up later > - */ > - set_pte(pte, pfn_pte(pfn, PAGE_KERNEL_EXEC)); > + set_pte(pte, pfn_pte(pfn, prot)); > pfn++; > } while (pte++, i++, i < PTRS_PER_PTE); > } ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 1/2] arm64: Get existing page protections in split_pmd 2015-11-05 10:15 ` Xishi Qiu @ 2015-11-06 1:24 ` Laura Abbott 0 siblings, 0 replies; 15+ messages in thread From: Laura Abbott @ 2015-11-06 1:24 UTC (permalink / raw) To: Xishi Qiu, Laura Abbott Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel, Kees Cook, Ard Biesheuvel, Mark Rutland On 11/05/2015 02:15 AM, Xishi Qiu wrote: > On 2015/11/4 5:48, Laura Abbott wrote: > >> >> Rather than always putting the least restrictived permissions >> (PAGE_KERNEL_EXEC) when spliting a pmd into pages, use >> the existing permissions from the pmd for the page. >> >> Signed-off-by: Laura Abbott <labbott@fedoraproject.org> >> --- >> arch/arm64/mm/mmu.c | 9 ++++----- >> 1 file changed, 4 insertions(+), 5 deletions(-) >> >> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c >> index 9211b85..ff41efa 100644 >> --- a/arch/arm64/mm/mmu.c >> +++ b/arch/arm64/mm/mmu.c >> @@ -75,14 +75,13 @@ static void __init *early_alloc(unsigned long sz) >> static void split_pmd(pmd_t *pmd, pte_t *pte) >> { >> unsigned long pfn = pmd_pfn(*pmd); >> + unsigned long addr = pfn << PAGE_SHIFT; >> + pgprot_t prot = __pgprot(pmd_val(*pmd) ^ addr) | PTE_TYPE_PAGE; >> + > > Hi Laura, > > I'm not quite understand, I find split_pud() doesn't set the flag > PMD_TYPE_TABLE. If we clear xx_TABLE_BIT, does that means the page > is large page? I'm assuming by large page you mean a block mapping. Yes, without that entry the kernel treats this as a block mapping > And what is the different from the flag xx_TYPE_SECT? That would mark it this as a block mapping which is not what we want here. > > Thanks, > Xishi Qiu > ^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 2/2] arm64: Allow changing of attributes outside of modules 2015-11-03 21:48 [PATCH 0/2] Support for set_memory_* outside of module space Laura Abbott 2015-11-03 21:48 ` [PATCH 1/2] arm64: Get existing page protections in split_pmd Laura Abbott @ 2015-11-03 21:48 ` Laura Abbott 2015-11-04 3:17 ` zhong jiang ` (3 more replies) 2015-11-03 23:42 ` [PATCH 0/2] Support for set_memory_* outside of module space Kees Cook 2 siblings, 4 replies; 15+ messages in thread From: Laura Abbott @ 2015-11-03 21:48 UTC (permalink / raw) To: Catalin Marinas, Will Deacon Cc: Laura Abbott, linux-arm-kernel, linux-kernel, Kees Cook, Xishi Qiu, Ard Biesheuvel, Mark Rutland Currently, the set_memory_* functions that are implemented for arm64 are restricted to module addresses only. This was mostly done because arm64 maps normal zone memory with larger page sizes to improve TLB performance. This has the side effect though of making it difficult to adjust attributes at the PAGE_SIZE granularity. There are an increasing number of use cases related to security where it is necessary to change the attributes of kernel memory. Add functionality to the page attribute changing code under a Kconfig to let systems designers decide if they want to make the trade off of security for TLB pressure. 
Signed-off-by: Laura Abbott <labbott@fedoraproject.org> --- arch/arm64/Kconfig.debug | 11 +++++++ arch/arm64/mm/mm.h | 3 ++ arch/arm64/mm/mmu.c | 2 +- arch/arm64/mm/pageattr.c | 74 ++++++++++++++++++++++++++++++++++++++++++++---- 4 files changed, 84 insertions(+), 6 deletions(-) diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug index d6285ef..abc6922 100644 --- a/arch/arm64/Kconfig.debug +++ b/arch/arm64/Kconfig.debug @@ -89,6 +89,17 @@ config DEBUG_ALIGN_RODATA If in doubt, say N +config DEBUG_CHANGE_PAGEATTR + bool "Allow all kernel memory to have attributes changed" + help + If this option is selected, APIs that change page attributes + (RW <-> RO, X <-> NX) will be valid for all memory mapped in + the kernel space. The trade off is that there may be increased + TLB pressure from finer grained page mapping. Turn on this option + if performance is more important than security + + If in doubt, say N + source "drivers/hwtracing/coresight/Kconfig" endmenu diff --git a/arch/arm64/mm/mm.h b/arch/arm64/mm/mm.h index ef47d99..7b0dcc4 100644 --- a/arch/arm64/mm/mm.h +++ b/arch/arm64/mm/mm.h @@ -1,3 +1,6 @@ extern void __init bootmem_init(void); void fixup_init(void); + +void split_pud(pud_t *old_pud, pmd_t *pmd); +void split_pmd(pmd_t *pmd, pte_t *pte); diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index ff41efa..cefad2d 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -72,7 +72,7 @@ static void __init *early_alloc(unsigned long sz) /* * remap a PMD into pages */ -static void split_pmd(pmd_t *pmd, pte_t *pte) +void split_pmd(pmd_t *pmd, pte_t *pte) { unsigned long pfn = pmd_pfn(*pmd); unsigned long addr = pfn << PAGE_SHIFT; diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c index e47ed1c..48a4ce9 100644 --- a/arch/arm64/mm/pageattr.c +++ b/arch/arm64/mm/pageattr.c @@ -15,9 +15,12 @@ #include <linux/module.h> #include <linux/sched.h> +#include <asm/pgalloc.h> #include <asm/pgtable.h> #include <asm/tlbflush.h> +#include 
"mm.h" + struct page_change_data { pgprot_t set_mask; pgprot_t clear_mask; @@ -36,6 +39,66 @@ static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr, return 0; } +#ifdef CONFIG_DEBUG_CHANGE_PAGEATTR +static int check_address(unsigned long addr) +{ + pgd_t *pgd = pgd_offset_k(addr); + pud_t *pud; + pmd_t *pmd; + pte_t *pte; + int ret = -EFAULT; + + if (pgd_none(*pgd)) + goto out; + + pud = pud_offset(pgd, addr); + if (pud_none(*pud)) + goto out; + + if (pud_sect(*pud)) { + pmd = pmd_alloc_one(&init_mm, addr); + if (!pmd) { + ret = -ENOMEM; + goto out; + } + split_pud(pud, pmd); + pud_populate(&init_mm, pud, pmd); + } + + pmd = pmd_offset(pud, addr); + if (pmd_none(*pmd)) + goto out; + + if (pmd_sect(*pmd)) { + pte = pte_alloc_one_kernel(&init_mm, addr); + if (!pte) { + ret = -ENOMEM; + goto out; + } + split_pmd(pmd, pte); + __pmd_populate(pmd, __pa(pte), PMD_TYPE_TABLE); + } + + pte = pte_offset_kernel(pmd, addr); + if (pte_none(*pte)) + goto out; + + flush_tlb_all(); + ret = 0; + +out: + return ret; +} +#else +static int check_address(unsigned long addr) +{ + if (addr < MODULES_VADDR || addr >= MODULES_END) + return -EINVAL; + + return 0; +} +#endif + static int change_memory_common(unsigned long addr, int numpages, pgprot_t set_mask, pgprot_t clear_mask) { @@ -45,17 +108,18 @@ static int change_memory_common(unsigned long addr, int numpages, int ret; struct page_change_data data; + if (addr < PAGE_OFFSET && !is_vmalloc_addr((void *)addr)) + return -EINVAL; + if (!IS_ALIGNED(addr, PAGE_SIZE)) { start &= PAGE_MASK; end = start + size; WARN_ON_ONCE(1); } - if (start < MODULES_VADDR || start >= MODULES_END) - return -EINVAL; - - if (end < MODULES_VADDR || end >= MODULES_END) - return -EINVAL; + ret = check_address(addr); + if (ret) + return ret; data.set_mask = set_mask; data.clear_mask = clear_mask; -- 2.4.3 ^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules 2015-11-03 21:48 ` [PATCH 2/2] arm64: Allow changing of attributes outside of modules Laura Abbott @ 2015-11-04 3:17 ` zhong jiang 2015-11-05 7:44 ` Ard Biesheuvel ` (2 subsequent siblings) 3 siblings, 0 replies; 15+ messages in thread From: zhong jiang @ 2015-11-04 3:17 UTC (permalink / raw) To: Laura Abbott Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel, Kees Cook, Xishi Qiu, Ard Biesheuvel, Mark Rutland On 2015/11/4 5:48, Laura Abbott wrote: > > Currently, the set_memory_* functions that are implemented for arm64 > are restricted to module addresses only. This was mostly done > because arm64 maps normal zone memory with larger page sizes to > improve TLB performance. This has the side effect though of making it > difficult to adjust attributes at the PAGE_SIZE granularity. There are > an increasing number of use cases related to security where it is > necessary to change the attributes of kernel memory. Add functionality > to the page attribute changing code under a Kconfig to let systems > designers decide if they want to make the trade off of security for TLB > pressure. > > Signed-off-by: Laura Abbott <labbott@fedoraproject.org> > --- > arch/arm64/Kconfig.debug | 11 +++++++ > arch/arm64/mm/mm.h | 3 ++ > arch/arm64/mm/mmu.c | 2 +- > arch/arm64/mm/pageattr.c | 74 ++++++++++++++++++++++++++++++++++++++++++++---- > 4 files changed, 84 insertions(+), 6 deletions(-) > > diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug > index d6285ef..abc6922 100644 > --- a/arch/arm64/Kconfig.debug > +++ b/arch/arm64/Kconfig.debug > @@ -89,6 +89,17 @@ config DEBUG_ALIGN_RODATA > > If in doubt, say N > > +config DEBUG_CHANGE_PAGEATTR > + bool "Allow all kernel memory to have attributes changed" > + help > + If this option is selected, APIs that change page attributes > + (RW <-> RO, X <-> NX) will be valid for all memory mapped in > + the kernel space. 
The trade off is that there may be increased > + TLB pressure from finer grained page mapping. Turn on this option > + if performance is more important than security > + > + If in doubt, say N > + > source "drivers/hwtracing/coresight/Kconfig" > > endmenu > diff --git a/arch/arm64/mm/mm.h b/arch/arm64/mm/mm.h > index ef47d99..7b0dcc4 100644 > --- a/arch/arm64/mm/mm.h > +++ b/arch/arm64/mm/mm.h > @@ -1,3 +1,6 @@ > extern void __init bootmem_init(void); > > void fixup_init(void); > + > +void split_pud(pud_t *old_pud, pmd_t *pmd); > +void split_pmd(pmd_t *pmd, pte_t *pte); > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c > index ff41efa..cefad2d 100644 > --- a/arch/arm64/mm/mmu.c > +++ b/arch/arm64/mm/mmu.c > @@ -72,7 +72,7 @@ static void __init *early_alloc(unsigned long sz) > /* > * remap a PMD into pages > */ > -static void split_pmd(pmd_t *pmd, pte_t *pte) > +void split_pmd(pmd_t *pmd, pte_t *pte) > { > unsigned long pfn = pmd_pfn(*pmd); > unsigned long addr = pfn << PAGE_SHIFT; > diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c > index e47ed1c..48a4ce9 100644 > --- a/arch/arm64/mm/pageattr.c > +++ b/arch/arm64/mm/pageattr.c > @@ -15,9 +15,12 @@ > #include <linux/module.h> > #include <linux/sched.h> > > +#include <asm/pgalloc.h> > #include <asm/pgtable.h> > #include <asm/tlbflush.h> > > +#include "mm.h" > + > struct page_change_data { > pgprot_t set_mask; > pgprot_t clear_mask; > @@ -36,6 +39,66 @@ static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr, > return 0; > } > > +#ifdef CONFIG_DEBUG_CHANGE_PAGEATTR > +static int check_address(unsigned long addr) > +{ > + pgd_t *pgd = pgd_offset_k(addr); > + pud_t *pud; > + pmd_t *pmd; > + pte_t *pte; > + int ret = -EFAULT; > + > + if (pgd_none(*pgd)) > + goto out; > + > + pud = pud_offset(pgd, addr); > + if (pud_none(*pud)) > + goto out; > + > + if (pud_sect(*pud)) { > + pmd = pmd_alloc_one(&init_mm, addr); > + if (!pmd) { > + ret = -ENOMEM; > + goto out; > + } > + 
split_pud(pud, pmd); > + pud_populate(&init_mm, pud, pmd); > + } > + > + pmd = pmd_offset(pud, addr); > + if (pmd_none(*pmd)) > + goto out; > + > + if (pmd_sect(*pmd)) { > + pte = pte_alloc_one_kernel(&init_mm, addr); > + if (!pte) { > + ret = -ENOMEM; > + goto out; > + } > + split_pmd(pmd, pte); > + __pmd_populate(pmd, __pa(pte), PMD_TYPE_TABLE); > + } > + > + pte = pte_offset_kernel(pmd, addr); > + if (pte_none(*pte)) > + goto out; > + > + flush_tlb_all(); > + ret = 0; > + > +out: > + return ret; > +} > +#else > +static int check_address(unsigned long addr) > +{ > + if (addr < MODULES_VADDR || addr >= MODULES_END) > + return -EINVAL; > + > + return 0; > +} > +#endif > + > static int change_memory_common(unsigned long addr, int numpages, > pgprot_t set_mask, pgprot_t clear_mask) > { > @@ -45,17 +108,18 @@ static int change_memory_common(unsigned long addr, int numpages, > int ret; > struct page_change_data data; > > + if (addr < PAGE_OFFSET && !is_vmalloc_addr((void *)addr)) > + return -EINVAL; > + > if (!IS_ALIGNED(addr, PAGE_SIZE)) { > start &= PAGE_MASK; > end = start + size; > WARN_ON_ONCE(1); > } > > - if (start < MODULES_VADDR || start >= MODULES_END) > - return -EINVAL; > - > - if (end < MODULES_VADDR || end >= MODULES_END) > - return -EINVAL; > + ret = check_address(addr); > + if (ret) > + return ret; > > data.set_mask = set_mask; > data.clear_mask = clear_mask; Hi Laura This patch seems vaild, but I didn't feel very reasonable. Because of the large page to make TLB performance better, just split it if it is necessary.therefore, I think the first thing we try to keep it, if they fail ,and then to split. thanks zhongjiang ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules 2015-11-03 21:48 ` [PATCH 2/2] arm64: Allow changing of attributes outside of modules Laura Abbott 2015-11-04 3:17 ` zhong jiang @ 2015-11-05 7:44 ` Ard Biesheuvel 2015-11-06 1:35 ` Laura Abbott [not found] ` <563974A8.3060306@huawei.com> 2015-11-05 11:29 ` Xishi Qiu 3 siblings, 1 reply; 15+ messages in thread From: Ard Biesheuvel @ 2015-11-05 7:44 UTC (permalink / raw) To: Laura Abbott Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel, Kees Cook, Xishi Qiu, Mark Rutland On 3 November 2015 at 22:48, Laura Abbott <labbott@fedoraproject.org> wrote: > > Currently, the set_memory_* functions that are implemented for arm64 > are restricted to module addresses only. This was mostly done > because arm64 maps normal zone memory with larger page sizes to > improve TLB performance. This has the side effect though of making it > difficult to adjust attributes at the PAGE_SIZE granularity. There are > an increasing number of use cases related to security where it is > necessary to change the attributes of kernel memory. Add functionality > to the page attribute changing code under a Kconfig to let systems > designers decide if they want to make the trade off of security for TLB > pressure. > > Signed-off-by: Laura Abbott <labbott@fedoraproject.org> > --- > arch/arm64/Kconfig.debug | 11 +++++++ > arch/arm64/mm/mm.h | 3 ++ > arch/arm64/mm/mmu.c | 2 +- > arch/arm64/mm/pageattr.c | 74 ++++++++++++++++++++++++++++++++++++++++++++---- > 4 files changed, 84 insertions(+), 6 deletions(-) > > diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug > index d6285ef..abc6922 100644 > --- a/arch/arm64/Kconfig.debug > +++ b/arch/arm64/Kconfig.debug > @@ -89,6 +89,17 @@ config DEBUG_ALIGN_RODATA > > If in doubt, say N > > +config DEBUG_CHANGE_PAGEATTR I don't think this belongs in Kconfig.debug, and I don't think this should default to off. 
We know we currently have no users of set_memory_xx() in arch/arm64 that operate on linear mapping addresses, so we will not introduce any performance regressions by adding this functionality now. By putting this feature behind a debug option, every newly introduced call set_memory_xx() that operates on linear or vmalloc() addresses needs to deal with -EINVAL (or depend on DEBUG_CHANGE_PAGEATTR), or it may error out at runtime if the feature is not enabled. > + bool "Allow all kernel memory to have attributes changed" > + help > + If this option is selected, APIs that change page attributes > + (RW <-> RO, X <-> NX) will be valid for all memory mapped in > + the kernel space. The trade off is that there may be increased > + TLB pressure from finer grained page mapping. Turn on this option > + if performance is more important than security > + This is backwards > + If in doubt, say N > + > source "drivers/hwtracing/coresight/Kconfig" > > endmenu [...] > diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c > index e47ed1c..48a4ce9 100644 > --- a/arch/arm64/mm/pageattr.c > +++ b/arch/arm64/mm/pageattr.c [...] > @@ -45,17 +108,18 @@ static int change_memory_common(unsigned long addr, int numpages, > int ret; > struct page_change_data data; > > + if (addr < PAGE_OFFSET && !is_vmalloc_addr((void *)addr)) > + return -EINVAL; > + Doesn't this exclude the module area? > if (!IS_ALIGNED(addr, PAGE_SIZE)) { > start &= PAGE_MASK; > end = start + size; > WARN_ON_ONCE(1); > } > > - if (start < MODULES_VADDR || start >= MODULES_END) > - return -EINVAL; > - > - if (end < MODULES_VADDR || end >= MODULES_END) > - return -EINVAL; > + ret = check_address(addr); > + if (ret) > + return ret; > > data.set_mask = set_mask; > data.clear_mask = clear_mask; > -- > 2.4.3 > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules 2015-11-05 7:44 ` Ard Biesheuvel @ 2015-11-06 1:35 ` Laura Abbott 0 siblings, 0 replies; 15+ messages in thread From: Laura Abbott @ 2015-11-06 1:35 UTC (permalink / raw) To: Ard Biesheuvel, Laura Abbott Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel, Kees Cook, Xishi Qiu, Mark Rutland On 11/04/2015 11:44 PM, Ard Biesheuvel wrote: > On 3 November 2015 at 22:48, Laura Abbott <labbott@fedoraproject.org> wrote: >> >> Currently, the set_memory_* functions that are implemented for arm64 >> are restricted to module addresses only. This was mostly done >> because arm64 maps normal zone memory with larger page sizes to >> improve TLB performance. This has the side effect though of making it >> difficult to adjust attributes at the PAGE_SIZE granularity. There are >> an increasing number of use cases related to security where it is >> necessary to change the attributes of kernel memory. Add functionality >> to the page attribute changing code under a Kconfig to let systems >> designers decide if they want to make the trade off of security for TLB >> pressure. >> >> Signed-off-by: Laura Abbott <labbott@fedoraproject.org> >> --- >> arch/arm64/Kconfig.debug | 11 +++++++ >> arch/arm64/mm/mm.h | 3 ++ >> arch/arm64/mm/mmu.c | 2 +- >> arch/arm64/mm/pageattr.c | 74 ++++++++++++++++++++++++++++++++++++++++++++---- >> 4 files changed, 84 insertions(+), 6 deletions(-) >> >> diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug >> index d6285ef..abc6922 100644 >> --- a/arch/arm64/Kconfig.debug >> +++ b/arch/arm64/Kconfig.debug >> @@ -89,6 +89,17 @@ config DEBUG_ALIGN_RODATA >> >> If in doubt, say N >> >> +config DEBUG_CHANGE_PAGEATTR > > I don't think this belongs in Kconfig.debug, and I don't think this > should default to off. 
> > We know we currently have no users of set_memory_xx() in arch/arm64 > that operate on linear mapping addresses, so we will not introduce any > performance regressions by adding this functionality now. By putting > this feature behind a debug option, every newly introduced call > set_memory_xx() that operates on linear or vmalloc() addresses needs > to deal with -EINVAL (or depend on DEBUG_CHANGE_PAGEATTR), or it may > error out at runtime if the feature is not enabled. > I stuck it in Kconfig.debug to have it match with the rest of the module and DEBUG_RODATA options. I'll pull it out. >> + bool "Allow all kernel memory to have attributes changed" >> + help >> + If this option is selected, APIs that change page attributes >> + (RW <-> RO, X <-> NX) will be valid for all memory mapped in >> + the kernel space. The trade off is that there may be increased >> + TLB pressure from finer grained page mapping. Turn on this option >> + if performance is more important than security >> + > > This is backwards > >> + If in doubt, say N >> + >> source "drivers/hwtracing/coresight/Kconfig" >> >> endmenu > > [...] > >> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c >> index e47ed1c..48a4ce9 100644 >> --- a/arch/arm64/mm/pageattr.c >> +++ b/arch/arm64/mm/pageattr.c > > [...] > >> @@ -45,17 +108,18 @@ static int change_memory_common(unsigned long addr, int numpages, >> int ret; >> struct page_change_data data; >> >> + if (addr < PAGE_OFFSET && !is_vmalloc_addr((void *)addr)) >> + return -EINVAL; >> + > > Doesn't this exclude the module area? 
> >> if (!IS_ALIGNED(addr, PAGE_SIZE)) { >> start &= PAGE_MASK; >> end = start + size; >> WARN_ON_ONCE(1); >> } >> >> - if (start < MODULES_VADDR || start >= MODULES_END) >> - return -EINVAL; >> - >> - if (end < MODULES_VADDR || end >= MODULES_END) >> - return -EINVAL; >> + ret = check_address(addr); >> + if (ret) >> + return ret; >> >> data.set_mask = set_mask; >> data.clear_mask = clear_mask; >> -- >> 2.4.3 >> ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules [not found] ` <563A4A74.60900@redhat.com> @ 2015-11-05 11:10 ` Xishi Qiu 2015-11-06 1:11 ` Laura Abbott 0 siblings, 1 reply; 15+ messages in thread From: Xishi Qiu @ 2015-11-05 11:10 UTC (permalink / raw) To: Laura Abbott Cc: zhong jiang, Laura Abbott, Kees Cook, Catalin Marinas, Will Deacon, linux-arm-kernel, LKML, Ard Biesheuvel, Mark Rutland On 2015/11/5 2:12, Laura Abbott wrote: > On 11/03/2015 06:59 PM, zhong jiang wrote: >> >> Hi Laura >> >> This patch seems vaild, but I didn't feel very reasonable. >> Because of the large page to make TLB performance better, just >> split it if it is necessary.therefore, I think the first thing >> we try to keep it, if they fail ,and then to split. >> > > I'm not quite sure I understand the request. We know we are going > to have to have something mapped at page size granularity so we > are going to have to break down the larger mappings no matter > what. Can you explain a bit more where you think we could try to > keep the larger mappings? > Hi Laura, He means like this, if the range is aligned with large page, we need not to split it, just change the flag. I have one more question. alloc_init_pud() ... if (!pud_none(old_pud)) ... memblock_free(table, PAGE_SIZE); ... Here we will free the memory from pmd page, so why not free more memory from 512 pte pages, if the 512 old pmds are not none? Thanks, Xishi Qiu > At least two things I noticed looking at this again though: > - This only splits the start address. If the range happens > to overlap a pud/pmd this won't work. I'll address that in v2 > - We're always flushing the TLB even if nothing changed. Was > this what you were referring to? > > >> thanks >> zhongjiang >> >> > > Thanks, > Laura > > > . > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules 2015-11-05 11:10 ` Xishi Qiu @ 2015-11-06 1:11 ` Laura Abbott 0 siblings, 0 replies; 15+ messages in thread From: Laura Abbott @ 2015-11-06 1:11 UTC (permalink / raw) To: Xishi Qiu Cc: zhong jiang, Laura Abbott, Kees Cook, Catalin Marinas, Will Deacon, linux-arm-kernel, LKML, Ard Biesheuvel, Mark Rutland On 11/05/2015 03:10 AM, Xishi Qiu wrote: > On 2015/11/5 2:12, Laura Abbott wrote: > >> On 11/03/2015 06:59 PM, zhong jiang wrote: >>> >>> Hi Laura >>> >>> This patch seems vaild, but I didn't feel very reasonable. >>> Because of the large page to make TLB performance better, just >>> split it if it is necessary.therefore, I think the first thing >>> we try to keep it, if they fail ,and then to split. >>> >> >> I'm not quite sure I understand the request. We know we are going >> to have to have something mapped at page size granularity so we >> are going to have to break down the larger mappings no matter >> what. Can you explain a bit more where you think we could try to >> keep the larger mappings? >> > > Hi Laura, > > He means like this, if the range is aligned with large page, we > need not to split it, just change the flag. > This complicates the logic for doing the update. Apply to page range nicely walks across all the 4K pages and does the update. It looks like x86 does the check to keep the large pages though so I'll give it some thought. > I have one more question. > > alloc_init_pud() > ... > if (!pud_none(old_pud)) > ... > memblock_free(table, PAGE_SIZE); > ... > > Here we will free the memory from pmd page, so why not free > more memory from 512 pte pages, if the 512 old pmds are not none? > It would be nice to reclaim the memory but I'm not sure if that will work. The memory was allocated before any of the regular kernel data structures were set up. It's not clear if giving the pages back to the buddy allocator would actually work. I'll take a look though. 
> Thanks, > Xishi Qiu > Thanks, Laura ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules 2015-11-03 21:48 ` [PATCH 2/2] arm64: Allow changing of attributes outside of modules Laura Abbott ` (2 preceding siblings ...) [not found] ` <563974A8.3060306@huawei.com> @ 2015-11-05 11:29 ` Xishi Qiu 3 siblings, 0 replies; 15+ messages in thread From: Xishi Qiu @ 2015-11-05 11:29 UTC (permalink / raw) To: Laura Abbott Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel, Kees Cook, Ard Biesheuvel, Mark Rutland On 2015/11/4 5:48, Laura Abbott wrote: > > Currently, the set_memory_* functions that are implemented for arm64 > are restricted to module addresses only. This was mostly done > because arm64 maps normal zone memory with larger page sizes to > improve TLB performance. This has the side effect though of making it > difficult to adjust attributes at the PAGE_SIZE granularity. There are > an increasing number of use cases related to security where it is > necessary to change the attributes of kernel memory. Add functionality > to the page attribute changing code under a Kconfig to let systems > designers decide if they want to make the trade off of security for TLB > pressure. > > Signed-off-by: Laura Abbott <labbott@fedoraproject.org> > --- > arch/arm64/Kconfig.debug | 11 +++++++ > arch/arm64/mm/mm.h | 3 ++ > arch/arm64/mm/mmu.c | 2 +- > arch/arm64/mm/pageattr.c | 74 ++++++++++++++++++++++++++++++++++++++++++++---- > 4 files changed, 84 insertions(+), 6 deletions(-) > > diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug > index d6285ef..abc6922 100644 > --- a/arch/arm64/Kconfig.debug > +++ b/arch/arm64/Kconfig.debug > @@ -89,6 +89,17 @@ config DEBUG_ALIGN_RODATA > > If in doubt, say N > > +config DEBUG_CHANGE_PAGEATTR > + bool "Allow all kernel memory to have attributes changed" > + help > + If this option is selected, APIs that change page attributes > + (RW <-> RO, X <-> NX) will be valid for all memory mapped in > + the kernel space. 
The trade off is that there may be increased > + TLB pressure from finer grained page mapping. Turn on this option > + if performance is more important than security > + > + If in doubt, say N > + > source "drivers/hwtracing/coresight/Kconfig" > > endmenu > diff --git a/arch/arm64/mm/mm.h b/arch/arm64/mm/mm.h > index ef47d99..7b0dcc4 100644 > --- a/arch/arm64/mm/mm.h > +++ b/arch/arm64/mm/mm.h > @@ -1,3 +1,6 @@ > extern void __init bootmem_init(void); > > void fixup_init(void); > + > +void split_pud(pud_t *old_pud, pmd_t *pmd); > +void split_pmd(pmd_t *pmd, pte_t *pte); > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c > index ff41efa..cefad2d 100644 > --- a/arch/arm64/mm/mmu.c > +++ b/arch/arm64/mm/mmu.c > @@ -72,7 +72,7 @@ static void __init *early_alloc(unsigned long sz) > /* > * remap a PMD into pages > */ > -static void split_pmd(pmd_t *pmd, pte_t *pte) > +void split_pmd(pmd_t *pmd, pte_t *pte) > { > unsigned long pfn = pmd_pfn(*pmd); > unsigned long addr = pfn << PAGE_SHIFT; > diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c > index e47ed1c..48a4ce9 100644 > --- a/arch/arm64/mm/pageattr.c > +++ b/arch/arm64/mm/pageattr.c > @@ -15,9 +15,12 @@ > #include <linux/module.h> > #include <linux/sched.h> > > +#include <asm/pgalloc.h> > #include <asm/pgtable.h> > #include <asm/tlbflush.h> > > +#include "mm.h" > + > struct page_change_data { > pgprot_t set_mask; > pgprot_t clear_mask; > @@ -36,6 +39,66 @@ static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr, > return 0; > } > > +#ifdef CONFIG_DEBUG_CHANGE_PAGEATTR > +static int check_address(unsigned long addr) > +{ > + pgd_t *pgd = pgd_offset_k(addr); > + pud_t *pud; > + pmd_t *pmd; > + pte_t *pte; > + int ret = -EFAULT; > + > + if (pgd_none(*pgd)) > + goto out; > + > + pud = pud_offset(pgd, addr); > + if (pud_none(*pud)) > + goto out; > + > + if (pud_sect(*pud)) { > + pmd = pmd_alloc_one(&init_mm, addr); > + if (!pmd) { > + ret = -ENOMEM; > + goto out; > + } > + 
split_pud(pud, pmd); > + pud_populate(&init_mm, pud, pmd); > + } > + > + pmd = pmd_offset(pud, addr); > + if (pmd_none(*pmd)) > + goto out; > + > + if (pmd_sect(*pmd)) { > + pte = pte_alloc_one_kernel(&init_mm, addr); > + if (!pte) { > + ret = -ENOMEM; > + goto out; > + } > + split_pmd(pmd, pte); > + __pmd_populate(pmd, __pa(pte), PMD_TYPE_TABLE); > + } > + > + pte = pte_offset_kernel(pmd, addr); > + if (pte_none(*pte)) > + goto out; > + > + flush_tlb_all(); > + ret = 0; > + > +out: > + return ret; > +} > +#else > +static int check_address(unsigned long addr) > +{ > + if (addr < MODULES_VADDR || addr >= MODULES_END) > + return -EINVAL; > + > + return 0; > +} > +#endif > + > static int change_memory_common(unsigned long addr, int numpages, > pgprot_t set_mask, pgprot_t clear_mask) > { > @@ -45,17 +108,18 @@ static int change_memory_common(unsigned long addr, int numpages, > int ret; > struct page_change_data data; > > + if (addr < PAGE_OFFSET && !is_vmalloc_addr((void *)addr)) > + return -EINVAL; > + > if (!IS_ALIGNED(addr, PAGE_SIZE)) { > start &= PAGE_MASK; > end = start + size; > WARN_ON_ONCE(1); > } > > - if (start < MODULES_VADDR || start >= MODULES_END) > - return -EINVAL; > - > - if (end < MODULES_VADDR || end >= MODULES_END) > - return -EINVAL; > + ret = check_address(addr); Hi Laura, We only split the first page here, if the numpages is not 1, it's wrong, right? Thanks, Xishi Qiu > + if (ret) > + return ret; > > data.set_mask = set_mask; > data.clear_mask = clear_mask; ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 0/2] Support for set_memory_* outside of module space 2015-11-03 21:48 [PATCH 0/2] Support for set_memory_* outside of module space Laura Abbott 2015-11-03 21:48 ` [PATCH 1/2] arm64: Get existing page protections in split_pmd Laura Abbott 2015-11-03 21:48 ` [PATCH 2/2] arm64: Allow changing of attributes outside of modules Laura Abbott @ 2015-11-03 23:42 ` Kees Cook 2015-11-04 18:51 ` Laura Abbott 2 siblings, 1 reply; 15+ messages in thread From: Kees Cook @ 2015-11-03 23:42 UTC (permalink / raw) To: Laura Abbott Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, LKML, Xishi Qiu, Ard Biesheuvel, Mark Rutland On Tue, Nov 3, 2015 at 1:48 PM, Laura Abbott <labbott@fedoraproject.org> wrote: > > Hi, > > Based on a recent discussion[1] there is interest in having set_memory_* work > on kernel memory for security and other use cases. This patch adds the > ability for that to happen behind a kernel option. If this is welcome enough, > the Kconfig can be dropped. This has been briefly tested but not stress tested. > > Thanks, > Laura > > [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2015-October/382079.html > > Laura Abbott (2): > arm64: Get existing page protections in split_pmd > arm64: Allow changing of attributes outside of modules > > arch/arm64/Kconfig.debug | 11 +++++++ > arch/arm64/mm/mm.h | 3 ++ > arch/arm64/mm/mmu.c | 11 ++++--- > arch/arm64/mm/pageattr.c | 74 ++++++++++++++++++++++++++++++++++++++++++++---- > 4 files changed, 88 insertions(+), 11 deletions(-) This seems like the right thing to have. What's arm64 doing for the equivalent of x86 and arm's set_kernel_text_* functions? x86 and arm call their set_memory_* functions, for example. A quick examination shows mm/mmu.c is just doing it "by hand" in fixup_executable and mark_rodata_ro? Could those functions use the new set_memory_* ones? -Kees -- Kees Cook Chrome OS Security ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 0/2] Support for set_memory_* outside of module space 2015-11-03 23:42 ` [PATCH 0/2] Support for set_memory_* outside of module space Kees Cook @ 2015-11-04 18:51 ` Laura Abbott 2015-11-04 19:06 ` Kees Cook 0 siblings, 1 reply; 15+ messages in thread From: Laura Abbott @ 2015-11-04 18:51 UTC (permalink / raw) To: Kees Cook, Laura Abbott Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, LKML, Xishi Qiu, Ard Biesheuvel, Mark Rutland On 11/03/2015 03:42 PM, Kees Cook wrote: > On Tue, Nov 3, 2015 at 1:48 PM, Laura Abbott <labbott@fedoraproject.org> wrote: >> >> Hi, >> >> Based on a recent discussion[1] there is interest in having set_memory_* work >> on kernel memory for security and other use cases. This patch adds the >> ability for that to happen behind a kernel option. If this is welcome enough, >> the Kconfig can be dropped. This has been briefly tested but not stress tested. >> >> Thanks, >> Laura >> >> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2015-October/382079.html >> >> Laura Abbott (2): >> arm64: Get existing page protections in split_pmd >> arm64: Allow changing of attributes outside of modules >> >> arch/arm64/Kconfig.debug | 11 +++++++ >> arch/arm64/mm/mm.h | 3 ++ >> arch/arm64/mm/mmu.c | 11 ++++--- >> arch/arm64/mm/pageattr.c | 74 ++++++++++++++++++++++++++++++++++++++++++++---- >> 4 files changed, 88 insertions(+), 11 deletions(-) > > This seems like the right thing to have. What's arm64 doing for the > equivalent of x86 and arm's set_kernel_text_* functions? x86 and arm > call their set_memory_* functions, for example. A quick examination > shows mm/mmu.c is just doing it "by hand" in fixup_executable and > mark_rodata_ro? Could those functions use the new set_memory_* ones? > It looks like mark_rodata_ro could probably use the set_memory_ro. I'll have to test it out. Longer term, the page table setup code should probably just be pulled out into a common file. 
Do you know the code path in ftrace which would trigger the set_kernel_text_* functions? If not, I'll go see if I can figure out if it's implemented yet on arm64. > -Kees > Thanks, Laura ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 0/2] Support for set_memory_* outside of module space 2015-11-04 18:51 ` Laura Abbott @ 2015-11-04 19:06 ` Kees Cook 0 siblings, 0 replies; 15+ messages in thread From: Kees Cook @ 2015-11-04 19:06 UTC (permalink / raw) To: Laura Abbott Cc: Laura Abbott, Catalin Marinas, Will Deacon, linux-arm-kernel, LKML, Xishi Qiu, Ard Biesheuvel, Mark Rutland On Wed, Nov 4, 2015 at 10:51 AM, Laura Abbott <labbott@redhat.com> wrote: > On 11/03/2015 03:42 PM, Kees Cook wrote: >> >> On Tue, Nov 3, 2015 at 1:48 PM, Laura Abbott <labbott@fedoraproject.org> >> wrote: >>> >>> >>> Hi, >>> >>> Based on a recent discussion[1] there is interest in having set_memory_* >>> work >>> on kernel memory for security and other use cases. This patch adds the >>> ability for that to happen behind a kernel option. If this is welcome >>> enough, >>> the Kconfig can be dropped. This has been briefly tested but not stress >>> tested. >>> >>> Thanks, >>> Laura >>> >>> [1] >>> http://lists.infradead.org/pipermail/linux-arm-kernel/2015-October/382079.html >>> >>> Laura Abbott (2): >>> arm64: Get existing page protections in split_pmd >>> arm64: Allow changing of attributes outside of modules >>> >>> arch/arm64/Kconfig.debug | 11 +++++++ >>> arch/arm64/mm/mm.h | 3 ++ >>> arch/arm64/mm/mmu.c | 11 ++++--- >>> arch/arm64/mm/pageattr.c | 74 >>> ++++++++++++++++++++++++++++++++++++++++++++---- >>> 4 files changed, 88 insertions(+), 11 deletions(-) >> >> >> This seems like the right thing to have. What's arm64 doing for the >> equivalent of x86 and arm's set_kernel_text_* functions? x86 and arm >> call their set_memory_* functions, for example. A quick examination >> shows mm/mmu.c is just doing it "by hand" in fixup_executable and >> mark_rodata_ro? Could those functions use the new set_memory_* ones? >> > > It looks like mark_rodata_ro could probably use the set_memory_ro. I'll > have to test it out. Longer term, the page table setup code should > probably just be pulled out into a common file. 
> > Do you know the code path in ftrace which would trigger the > set_kernel_text_* > If not, I'll go see if I can figure out if it's implemented yet on arm64. Looking now, it seems that things like jump label use aarch64_insn_patch_text, which ultimately call down to __aarch64_insn_write(), which is using the fixmap via FIX_TEXT_POKE0. -Kees -- Kees Cook Chrome OS Security ^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2015-11-06 1:35 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-11-03 21:48 [PATCH 0/2] Support for set_memory_* outside of module space Laura Abbott
2015-11-03 21:48 ` [PATCH 1/2] arm64: Get existing page protections in split_pmd Laura Abbott
2015-11-05  7:07 ` Ard Biesheuvel
2015-11-05 10:15 ` Xishi Qiu
2015-11-06  1:24 ` Laura Abbott
2015-11-03 21:48 ` [PATCH 2/2] arm64: Allow changing of attributes outside of modules Laura Abbott
2015-11-04  3:17 ` zhong jiang
2015-11-05  7:44 ` Ard Biesheuvel
2015-11-06  1:35 ` Laura Abbott
     [not found] ` <563974A8.3060306@huawei.com>
     [not found] ` <563A4A74.60900@redhat.com>
2015-11-05 11:10 ` Xishi Qiu
2015-11-06  1:11 ` Laura Abbott
2015-11-05 11:29 ` Xishi Qiu
2015-11-03 23:42 ` [PATCH 0/2] Support for set_memory_* outside of module space Kees Cook
2015-11-04 18:51 ` Laura Abbott
2015-11-04 19:06 ` Kees Cook