From: Kees Cook <keescook@chromium.org>
To: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>, Thomas Garnier <thgarnie@google.com>, Andy Lutomirski <luto@kernel.org>, x86@kernel.org, Borislav Petkov <bp@suse.de>, Baoquan He <bhe@redhat.com>, Yinghai Lu <yinghai@kernel.org>, Juergen Gross <jgross@suse.com>, Matt Fleming <matt@codeblueprint.co.uk>, Toshi Kani <toshi.kani@hpe.com>, Andrew Morton <akpm@linux-foundation.org>, Dan Williams <dan.j.williams@intel.com>, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>, Dave Hansen <dave.hansen@linux.intel.com>, Xiao Guangrong <guangrong.xiao@linux.intel.com>, Martin Schwidefsky <schwidefsky@de.ibm.com>, "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>, Alexander Kuleshov <kuleshovmail@gmail.com>, Alexander Popov <alpopov@ptsecurity.com>, Dave Young <dyoung@redhat.com>, Joerg Roedel <jroedel@suse.de>, Lv Zheng <lv.zheng@intel.com>, Mark Salter <msalter@redhat.com>, Dmitry Vyukov <dvyukov@google.com>, Stephen Smalley <sds@tycho.nsa.gov>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Christian Borntraeger <borntraeger@de.ibm.com>, Jan Beulich <JBeulich@suse.com>, linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>, linux-doc@vger.kernel.org, kernel-hardening@lists.openwall.com
Subject: [PATCH v7 3/9] x86/mm: PUD VA support for physical mapping (x86_64)
Date: Tue, 21 Jun 2016 17:47:00 -0700
Message-ID: <1466556426-32664-4-git-send-email-keescook@chromium.org>
In-Reply-To: <1466556426-32664-1-git-send-email-keescook@chromium.org>

From: Thomas Garnier <thgarnie@google.com>

Minor change that allows early boot physical mapping of PUD level virtual
addresses. The current implementation expects the virtual address to be
PUD aligned. For KASLR memory randomization, we need to be able to
randomize the offset used on the PUD table.

It has no impact on current usage.
Signed-off-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/mm/init_64.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 6714712bd5da..7bf1ddb54537 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -465,7 +465,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 
 /*
  * Create PUD level page table mapping for physical addresses. The virtual
- * and physical address have to be aligned at this level.
+ * and physical address do not have to be aligned at this level. KASLR can
+ * randomize virtual addresses up to this level.
  * It returns the last physical address mapped.
  */
 static unsigned long __meminit
@@ -474,14 +475,18 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 {
 	unsigned long pages = 0, paddr_next;
 	unsigned long paddr_last = paddr_end;
-	int i = pud_index(paddr);
+	unsigned long vaddr = (unsigned long)__va(paddr);
+	int i = pud_index(vaddr);
 
 	for (; i < PTRS_PER_PUD; i++, paddr = paddr_next) {
-		pud_t *pud = pud_page + pud_index(paddr);
+		pud_t *pud;
 		pmd_t *pmd;
 		pgprot_t prot = PAGE_KERNEL;
 
+		vaddr = (unsigned long)__va(paddr);
+		pud = pud_page + pud_index(vaddr);
 		paddr_next = (paddr & PUD_MASK) + PUD_SIZE;
+
 		if (paddr >= paddr_end) {
 			if (!after_bootmem &&
 			    !e820_any_mapped(paddr & PUD_MASK, paddr_next,
@@ -551,7 +556,7 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 
 /*
  * Create page table mapping for the physical memory for specific physical
- * addresses. The virtual and physical addresses have to be aligned on PUD level
+ * addresses. The virtual and physical addresses have to be aligned on PMD level
  * down. It returns the last physical address mapped.
  */
 unsigned long __meminit
-- 
2.7.4