From: Peter Zijlstra <peterz@infradead.org>
To: Khalid Aziz <khalid.aziz@oracle.com>
Cc: juergh@gmail.com, tycho@tycho.ws, jsteckli@amazon.de, ak@linux.intel.com, torvalds@linux-foundation.org, liran.alon@oracle.com, keescook@google.com, akpm@linux-foundation.org, mhocko@suse.com, catalin.marinas@arm.com, will.deacon@arm.com, jmorris@namei.org, konrad.wilk@oracle.com, Juerg Haefliger <juerg.haefliger@canonical.com>, deepa.srinivasan@oracle.com, chris.hyser@oracle.com, tyhicks@canonical.com, dwmw@amazon.co.uk, andrew.cooper3@citrix.com, jcm@redhat.com, boris.ostrovsky@oracle.com, kanth.ghatraju@oracle.com, oao.m.martins@oracle.com, jmattson@google.com, pradeep.vincent@oracle.com, john.haxby@oracle.com, tglx@linutronix.de, kirill.shutemov@linux.intel.com, hch@lst.de, steven.sistare@oracle.com, labbott@redhat.com, luto@kernel.org, dave.hansen@intel.com, kernel-hardening@lists.openwall.com, linux-mm@kvack.org, x86@kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Tycho Andersen <tycho@docker.com>, Marco Benatto <marco.antonio.780@gmail.com>
Subject: Re: [RFC PATCH v8 03/14] mm, x86: Add support for eXclusive Page Frame Ownership (XPFO)
Date: Thu, 14 Feb 2019 11:56:31 +0100
Message-ID: <20190214105631.GJ32494@hirez.programming.kicks-ass.net>
In-Reply-To: <8275de2a7e6b72d19b1cd2ec5d71a42c2c7dd6c5.1550088114.git.khalid.aziz@oracle.com>

On Wed, Feb 13, 2019 at 05:01:26PM -0700, Khalid Aziz wrote:

>  static inline void *kmap_atomic(struct page *page)
>  {
> +	void *kaddr;
> +
>  	preempt_disable();
>  	pagefault_disable();
> +	kaddr = page_address(page);
> +	xpfo_kmap(kaddr, page);
> +	return kaddr;
>  }
>  #define kmap_atomic_prot(page, prot)	kmap_atomic(page)
>
>  static inline void __kunmap_atomic(void *addr)
>  {
> +	xpfo_kunmap(addr, virt_to_page(addr));
>  	pagefault_enable();
>  	preempt_enable();
>  }

How is that supposed to work; IIRC kmap_atomic was supposed to be
IRQ-safe.
> +/* Per-page XPFO house-keeping data */
> +struct xpfo {
> +	unsigned long flags;	/* Page state */
> +	bool inited;		/* Map counter and lock initialized */

What's sizeof(_Bool)? Why can't you use a bit in that flags word?

> +	atomic_t mapcount;	/* Counter for balancing map/unmap requests */
> +	spinlock_t maplock;	/* Lock to serialize map/unmap requests */
> +};

Without that bool, the structure would be 16 bytes on 64bit, which seems
like a good number.

> +void xpfo_kmap(void *kaddr, struct page *page)
> +{
> +	struct xpfo *xpfo;
> +
> +	if (!static_branch_unlikely(&xpfo_inited))
> +		return;
> +
> +	xpfo = lookup_xpfo(page);
> +
> +	/*
> +	 * The page was allocated before page_ext was initialized (which means
> +	 * it's a kernel page) or it's allocated to the kernel, so nothing to
> +	 * do.
> +	 */
> +	if (!xpfo || unlikely(!xpfo->inited) ||
> +	    !test_bit(XPFO_PAGE_USER, &xpfo->flags))
> +		return;
> +
> +	spin_lock(&xpfo->maplock);
> +
> +	/*
> +	 * The page was previously allocated to user space, so map it back
> +	 * into the kernel. No TLB flush required.
> +	 */
> +	if ((atomic_inc_return(&xpfo->mapcount) == 1) &&
> +	    test_and_clear_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags))
> +		set_kpte(kaddr, page, PAGE_KERNEL);
> +
> +	spin_unlock(&xpfo->maplock);
> +}
> +EXPORT_SYMBOL(xpfo_kmap);
> +
> +void xpfo_kunmap(void *kaddr, struct page *page)
> +{
> +	struct xpfo *xpfo;
> +
> +	if (!static_branch_unlikely(&xpfo_inited))
> +		return;
> +
> +	xpfo = lookup_xpfo(page);
> +
> +	/*
> +	 * The page was allocated before page_ext was initialized (which means
> +	 * it's a kernel page) or it's allocated to the kernel, so nothing to
> +	 * do.
> +	 */
> +	if (!xpfo || unlikely(!xpfo->inited) ||
> +	    !test_bit(XPFO_PAGE_USER, &xpfo->flags))
> +		return;
> +
> +	spin_lock(&xpfo->maplock);
> +
> +	/*
> +	 * The page is to be allocated back to user space, so unmap it from the
> +	 * kernel, flush the TLB and tag it as a user page.
> +	 */
> +	if (atomic_dec_return(&xpfo->mapcount) == 0) {
> +		WARN(test_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags),
> +		     "xpfo: unmapping already unmapped page\n");
> +		set_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags);
> +		set_kpte(kaddr, page, __pgprot(0));
> +		xpfo_flush_kernel_tlb(page, 0);
> +	}
> +
> +	spin_unlock(&xpfo->maplock);
> +}
> +EXPORT_SYMBOL(xpfo_kunmap);

And these here things are most definitely not IRQ-safe.