From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752873AbdLMK7D (ORCPT );
	Wed, 13 Dec 2017 05:59:03 -0500
Received: from mga05.intel.com ([192.55.52.43]:1708 "EHLO mga05.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752728AbdLMK6j (ORCPT );
	Wed, 13 Dec 2017 05:58:39 -0500
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.45,397,1508828400"; d="scan'208";a="17860575"
From: "Kirill A. Shutemov"
To: Andrew Morton
Cc: Vlastimil Babka, Andrea Arcangeli, Michal Hocko,
	linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Catalin Marinas,
	"Kirill A . Shutemov"
Subject: [PATCHv4 04/12] arm64: Provide pmdp_establish() helper
Date: Wed, 13 Dec 2017 13:57:48 +0300
Message-Id: <20171213105756.69879-5-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.15.0
In-Reply-To: <20171213105756.69879-1-kirill.shutemov@linux.intel.com>
References: <20171213105756.69879-1-kirill.shutemov@linux.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Catalin Marinas

We need an atomic way to set up a pmd page table entry, avoiding races
with the CPU setting the dirty/accessed bits. This is required to
implement a pmdp_invalidate() that doesn't lose these bits.

Signed-off-by: Catalin Marinas
Signed-off-by: Kirill A. Shutemov
---
 arch/arm64/include/asm/pgtable.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 149d05fb9421..116d610a2620 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -676,6 +676,13 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 {
 	ptep_set_wrprotect(mm, address, (pte_t *)pmdp);
 }
+
+#define pmdp_establish pmdp_establish
+static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
+		unsigned long address, pmd_t *pmdp, pmd_t pmd)
+{
+	return __pmd(xchg_relaxed(&pmd_val(*pmdp), pmd_val(pmd)));
+}
 #endif

 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
--
2.15.0
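The helper above is the whole patch: a single xchg_relaxed() on the pmd
word. The reason an exchange is used rather than a plain store is that
the CPU (on arm64, the hardware DBM mechanism) may set the
dirty/accessed bits in the entry concurrently, and the exchange returns
the value that was actually in memory, so no such update can be lost.
As a rough, illustrative sketch of how a generic pmdp_invalidate()
could be built on top of pmdp_establish() (assuming the usual generic
helpers pmd_mknotpresent() and flush_pmd_tlb_range(); this sketches the
idea, not the exact code elsewhere in the series):

pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
		      pmd_t *pmdp)
{
	/*
	 * Atomically swap in a not-present copy of the current entry.
	 * The exchange returns the old pmd, so any dirty/accessed bits
	 * set by the CPU right up to the swap are captured in 'old'
	 * rather than lost to a racing hardware update.
	 */
	pmd_t old = pmdp_establish(vma, address, pmdp,
				   pmd_mknotpresent(*pmdp));

	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
	return old;
}

The caller can then transfer the dirty/accessed state from the returned
entry into whatever mapping replaces it, which is exactly the property
a pmdp_invalidate() that "doesn't lose these bits" needs.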