From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mx142.netapp.com ([216.240.21.19]:23908 "EHLO mx142.netapp.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751750AbeCMRZh
	(ORCPT); Tue, 13 Mar 2018 13:25:37 -0400
Subject: [RFC 1/7] mm: Add new vma flag VM_LOCAL_CPU
To: linux-fsdevel
References:
CC: Ric Wheeler, Miklos Szeredi, Steve French, Steven Whitehouse,
	Jeff Moyer, Sage Weil, Jan Kara, Amir Goldstein, Andy Rudoff,
	Anna Schumaker, Amit Golander, Sagi Manole, Shachar Sharon
From: Boaz Harrosh
Message-ID: <443fea57-f165-6bed-8c8a-0a32f72b9cd2@netapp.com>
Date: Tue, 13 Mar 2018 19:15:46 +0200
MIME-Version: 1.0
In-Reply-To:
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

On a call to mmap, an mmap provider (like an FS) can set this flag on
vma->vm_flags. It tells the kernel that the vma will only ever be used
from a single core, so invalidation of its PTEs does not need to be
scheduled on all CPUs. (A sketch of how a provider would set the flag
is shown further below.)

The motivation for this flag is the ZUFS project, where we want to
optimally map user-application buffers into a user-mode server, execute
the operation, and efficiently unmap them. In this project we use a
per-core server thread, so everything is kept local. If we use the
regular zap_ptes() API, all CPUs are scheduled for the unmap, even
though in our case we know that only a single core has ever touched the
mapping. The regular zap_ptes adds a very big latency to every
operation and mostly kills the concurrency of the overall system,
because it imposes a serialization point across all cores.

Some preliminary measurements on a 40-core machine:

	             unpatched              patched
	Threads    Op/s    Lat [us]      Op/s    Lat [us]
	~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	  1       185391      4.9       200799      4.6
	  2       197993      9.6       314321      5.9
	  4       310597     12.1       565574      6.6
	  8       546702     13.8      1113138      6.6
	 12       641728     17.2      1598451      6.8
	 18       744750     22.2      1648689      7.8
	 24       790805     28.3      1702285      8
	 36       849763     38.9      1783346     13.4
	 48       792000     44.6      1741873     17.4

[FIXME] We still need to actually enforce this policy: on the very
first pte insert we should sample the CPU id in use, and on all
subsequent pte inserts verify that it is the same CPU id (see the
enforcement sketch below).

NOTE: This vma is never used during a page fault. It is always used
synchronously, from a thread whose affinity is set to a single core.
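To make the intended usage concrete, here is a minimal, illustrative
sketch of an mmap provider setting the flag at mmap time. The zus_*
names and the empty vm_operations_struct are made up for illustration
and are not part of this patch:

/* Illustrative only: a made-up mmap provider marking its vma single-core. */
#include <linux/fs.h>
#include <linux/mm.h>

static const struct vm_operations_struct zus_vm_ops = {
	/* no .fault: PTEs are inserted synchronously, never on a fault */
};

static int zus_file_mmap(struct file *file, struct vm_area_struct *vma)
{
	/* All pte insert/zap on this vma will come from one pinned core */
	vma->vm_flags |= VM_LOCAL_CPU;
	vma->vm_ops = &zus_vm_ops;
	return 0;
}

static const struct file_operations zus_file_ops = {
	.mmap	= zus_file_mmap,
};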
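And a rough sketch of the missing enforcement mentioned in the FIXME
above. The zus_vma_private structure, its owner_cpu field and the
helper name are hypothetical; they only show the idea of sampling the
CPU id on the first insertion and checking it on later ones:

/*
 * Hypothetical enforcement sketch, not part of this patch: remember the
 * CPU that did the first pte insert for the vma and warn if a later
 * insert arrives from a different one.  'owner_cpu' would live in the
 * provider's per-vma private data.
 */
#include <linux/bug.h>
#include <linux/smp.h>

struct zus_vma_private {
	int owner_cpu;			/* -1 until the first pte insert */
};

static bool zus_vma_same_cpu(struct zus_vma_private *zvp)
{
	int cpu = raw_smp_processor_id();

	if (zvp->owner_cpu == -1)
		zvp->owner_cpu = cpu;	/* first insert: record the core */

	return !WARN_ON_ONCE(zvp->owner_cpu != cpu);
}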
Signed-off-by: Boaz Harrosh
---
 fs/proc/task_mmu.c | 3 +++
 include/linux/mm.h | 3 +++
 mm/memory.c        | 2 +-
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 339e4c1..20786ba 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -681,6 +681,9 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
 		[ilog2(VM_PKEY_BIT2)]	= "",
 		[ilog2(VM_PKEY_BIT3)]	= "",
 #endif
+#ifdef CONFIG_ARCH_USES_HIGH_VMA_FLAGS
+		[ilog2(VM_LOCAL_CPU)]	= "lc",
+#endif
 	};
 	size_t i;

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea818ff..02bb8b5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -226,6 +226,9 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_HIGH_ARCH_2	BIT(VM_HIGH_ARCH_BIT_2)
 #define VM_HIGH_ARCH_3	BIT(VM_HIGH_ARCH_BIT_3)
 #define VM_HIGH_ARCH_4	BIT(VM_HIGH_ARCH_BIT_4)
+#define VM_LOCAL_CPU	BIT(37)	/* FIXME: Needs to move from here */
+#else /* ! CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
+#define VM_LOCAL_CPU	0	/* FIXME: Needs to move from here */
 #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
 
 #if defined(CONFIG_X86)

diff --git a/mm/memory.c b/mm/memory.c
index 7930046..7620ced 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1804,7 +1804,7 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 				goto out_unlock;
 			entry = *pte;
 			goto out_mkwrite;
-		} else
+		} else if (!(vma->vm_flags & VM_LOCAL_CPU))
 			goto out_unlock;
 	}

-- 
2.5.5