From: Khalid Aziz <khalid.aziz@oracle.com>
To: juergh@gmail.com, tycho@tycho.ws, jsteckli@amazon.de, ak@linux.intel.com, torvalds@linux-foundation.org, liran.alon@oracle.com, keescook@google.com, konrad.wilk@oracle.com
Cc: Tycho Andersen <tycho@docker.com>, deepa.srinivasan@oracle.com, chris.hyser@oracle.com, tyhicks@canonical.com, dwmw@amazon.co.uk, andrew.cooper3@citrix.com, jcm@redhat.com, boris.ostrovsky@oracle.com, kanth.ghatraju@oracle.com, joao.m.martins@oracle.com, jmattson@google.com, pradeep.vincent@oracle.com, john.haxby@oracle.com, tglx@linutronix.de, kirill.shutemov@linux.intel.com, hch@lst.de, steven.sistare@oracle.com, kernel-hardening@lists.openwall.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, x86@kernel.org, Khalid Aziz <khalid.aziz@oracle.com>
Subject: [RFC PATCH v7 09/16] mm: add a user_virt_to_phys symbol
Date: Thu, 10 Jan 2019 14:09:41 -0700	[thread overview]
Message-ID: <c9a409397fc608f7ae6297597d9ea3d21eeb3b38.1547153058.git.khalid.aziz@oracle.com> (raw)
In-Reply-To: <cover.1547153058.git.khalid.aziz@oracle.com>

From: Tycho Andersen <tycho@docker.com>

We need something like this for testing XPFO. Since it's architecture
specific, putting it in the test code is slightly awkward, so let's make
it an arch-specific symbol and export it for use in LKDTM.
v6: * add a definition of user_virt_to_phys in the !CONFIG_XPFO case

CC: linux-arm-kernel@lists.infradead.org
CC: x86@kernel.org
Signed-off-by: Tycho Andersen <tycho@docker.com>
Tested-by: Marco Benatto <marco.antonio.780@gmail.com>
Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
---
 arch/x86/mm/xpfo.c   | 57 ++++++++++++++++++++++++++++++++++++++++++++
 include/linux/xpfo.h |  8 +++++++
 2 files changed, 65 insertions(+)

diff --git a/arch/x86/mm/xpfo.c b/arch/x86/mm/xpfo.c
index d1f04ea533cd..bcdb2f2089d2 100644
--- a/arch/x86/mm/xpfo.c
+++ b/arch/x86/mm/xpfo.c
@@ -112,3 +112,60 @@ inline void xpfo_flush_kernel_tlb(struct page *page, int order)
 	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
 }
+
+/* Convert a user space virtual address to a physical address.
+ * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
+ * arch/x86/mm/pageattr.c
+ */
+phys_addr_t user_virt_to_phys(unsigned long addr)
+{
+	phys_addr_t phys_addr;
+	unsigned long offset;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset(current->mm, addr);
+	if (pgd_none(*pgd))
+		return 0;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return 0;
+
+	if (p4d_large(*p4d) || !p4d_present(*p4d)) {
+		phys_addr = (unsigned long)p4d_pfn(*p4d) << PAGE_SHIFT;
+		offset = addr & ~P4D_MASK;
+		goto out;
+	}
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return 0;
+
+	if (pud_large(*pud) || !pud_present(*pud)) {
+		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;
+		offset = addr & ~PUD_MASK;
+		goto out;
+	}
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return 0;
+
+	if (pmd_large(*pmd) || !pmd_present(*pmd)) {
+		phys_addr = (unsigned long)pmd_pfn(*pmd) << PAGE_SHIFT;
+		offset = addr & ~PMD_MASK;
+		goto out;
+	}
+
+	pte = pte_offset_kernel(pmd, addr);
+	phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
+	offset = addr & ~PAGE_MASK;
+
+out:
+	return (phys_addr_t)(phys_addr | offset);
+}
+EXPORT_SYMBOL(user_virt_to_phys);

diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 0c26836a24e1..d4b38ab8a633 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -23,6 +23,10 @@ struct page;
 
 #ifdef CONFIG_XPFO
 
+#include <linux/dma-mapping.h>
+
+#include <linux/types.h>
+
 extern struct page_ext_operations page_xpfo_ops;
 
 void set_kpte(void *kaddr, struct page *page, pgprot_t prot);
@@ -48,6 +52,8 @@ void xpfo_temp_unmap(const void *addr, size_t size, void **mapping,
 
 bool xpfo_enabled(void);
 
+phys_addr_t user_virt_to_phys(unsigned long addr);
+
 #else /* !CONFIG_XPFO */
 
 static inline void xpfo_kmap(void *kaddr, struct page *page) { }
@@ -72,6 +78,8 @@ static inline void xpfo_temp_unmap(const void *addr, size_t size,
 
 static inline bool xpfo_enabled(void) { return false; }
 
+static inline phys_addr_t user_virt_to_phys(unsigned long addr) { return 0; }
+
 #endif /* CONFIG_XPFO */
 
 #endif /* _LINUX_XPFO_H */
-- 
2.17.1