From mboxrd@z Thu Jan 1 00:00:00 1970
From: Laurent Dufour
To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org,
	kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org,
	dave@stgolabs.net, jack@suse.cz, Matthew Wilcox, benh@kernel.crashing.org,
	mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner, Ingo Molnar,
	hpa@zytor.com, Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang@intel.com, sergey.senozhatsky.work@gmail.com,
	Daniel Jordan
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, haren@linux.vnet.ibm.com,
	khandual@linux.vnet.ibm.com, npiggin@gmail.com, bsingharora@gmail.com,
	Tim Chen, linuxppc-dev@lists.ozlabs.org, x86@kernel.org
Subject: [PATCH v7 04/24] mm: Don't assume page-table invariance during faults
Date: Tue, 6 Feb 2018 17:49:50 +0100
In-Reply-To: <1517935810-31177-1-git-send-email-ldufour@linux.vnet.ibm.com>
References: <1517935810-31177-1-git-send-email-ldufour@linux.vnet.ibm.com>
Message-Id: <1517935810-31177-5-git-send-email-ldufour@linux.vnet.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

From: Peter Zijlstra

One of the side effects of speculating on faults (without holding
mmap_sem) is that we can race with free_pgtables() and therefore we
cannot assume the page-tables will stick around.

Remove the reliance on the pte pointer.

Signed-off-by: Peter Zijlstra (Intel)

In most cases pte_unmap_same() was returning 1, meaning that
do_swap_page() should do its processing, so in most cases there will be
no impact.

Regarding the case where pte_unmap_same() was returning 0, and thus
do_swap_page() returned 0 too: this happens when the page has already
been swapped back in. This may happen before do_swap_page() gets called
or while in the call to do_swap_page(). In the latter case, the check
done when swapin_readahead() returns will detect it.

The worst case would be a page fault occurring on two threads at the
same time for the same swapped-out page. In that case one thread will
spend time looping in __read_swap_cache_async(). But in the regular page
fault path this is even worse, since the thread would wait for the
semaphore to be released before starting anything.
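For context, here is a simplified sketch of the pte_unmap_same() logic
discussed above (paraphrased for illustration, not a verbatim copy of
mm/memory.c). It shows where the pte pointer is dereferenced under the
pte lock, which is exactly what cannot be relied upon once the
page-tables may be freed concurrently:

/*
 * Illustrative sketch only (paraphrased, not verbatim kernel code):
 * on configurations where a pte cannot be read atomically, re-read it
 * under the pte lock and check it still matches the value sampled at
 * fault time.
 */
static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
				 pte_t *page_table, pte_t orig_pte)
{
	int same = 1;
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
	if (sizeof(pte_t) > sizeof(unsigned long)) {
		spinlock_t *ptl = pte_lockptr(mm, pmd);

		spin_lock(ptl);
		/*
		 * This dereference of page_table is the "reliance on the
		 * pte pointer": without mmap_sem held, free_pgtables()
		 * could tear down the page-table page underneath us.
		 */
		same = pte_same(*page_table, orig_pte);
		spin_unlock(ptl);
	}
#endif
	pte_unmap(page_table);
	return same;
}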
[Remove only if !CONFIG_SPECULATIVE_PAGE_FAULT]
Signed-off-by: Laurent Dufour
---
 mm/memory.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 5ec6433d6a5c..32b9eb77d95c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2288,6 +2288,7 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(apply_to_page_range);
 
+#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
 /*
  * handle_pte_fault chooses page fault handler according to an entry which was
  * read non-atomically. Before making any commitment, on those architectures
@@ -2311,6 +2312,7 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 	pte_unmap(page_table);
 	return same;
 }
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
 
 static inline void cow_user_page(struct page *dst, struct page *src,
 				 unsigned long va, struct vm_area_struct *vma)
 {
@@ -2898,11 +2900,13 @@ int do_swap_page(struct vm_fault *vmf)
 		swapcache = page;
 	}
 
+#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
 	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte)) {
 		if (page)
 			put_page(page);
 		goto out;
 	}
+#endif
 
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
-- 
2.7.4
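As a note on the "check done when swapin_readahead() returns" mentioned
in the changelog: later in do_swap_page() the pte is re-mapped and
re-compared under its lock once the page has been brought back in, which
is what catches a concurrent swap-in. A paraphrased sketch of that
pattern (for illustration, not the exact mm/memory.c code) is:

	/*
	 * Illustrative sketch only (paraphrased): after swapin_readahead()
	 * and locking the page, re-map the pte under its lock and compare
	 * it with the value observed at fault time; back out if somebody
	 * else already handled the fault.
	 */
	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
				       &vmf->ptl);
	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
		goto out_nomap;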