From: Muchun Song
Date: Wed, 9 Dec 2020 17:52:10 +0800
Subject: Re: [External] [PATCH RFC 0/9] mm, sparse-vmemmap: Introduce compound pagemaps
To: Joao Martins
Cc: Linux Memory Management List, Dan Williams, Ira Weiny,
 linux-nvdimm@lists.01.org, Matthew Wilcox, Jason Gunthorpe, Jane Chu,
 Mike Kravetz, Andrew Morton
In-Reply-To: <20201208172901.17384-1-joao.m.martins@oracle.com>
List-ID: linux-mm@kvack.org

On Wed, Dec 9, 2020 at 1:32 AM Joao Martins wrote:
>
> Hey,
>
> This small series attempts to minimize 'struct page' overhead by
> pursuing a similar approach as Muchun Song's series "Free some vmemmap
> pages of hugetlb page"[0], but applied to devmap/ZONE_DEVICE.
>
> [0] https://lore.kernel.org/linux-mm/20201130151838.11208-1-songmuchun@bytedance.com/
>

Oh, well. It looks like you agree with my optimization approach and have
fully understood it. You are also welcome to help review that series if
you have time. :)

> The link above describes it quite nicely, but the idea is to reuse the
> tail page vmemmap areas, in particular the area which only describes
> tail pages. A vmemmap page describes 64 struct pages, so the first
> vmemmap page for a given ZONE_DEVICE area would contain the head page
> and 63 tail pages. The second vmemmap page would contain only tail
> pages, and that is what gets reused across the rest of the
> subsection/section. The bigger the page size, the bigger the savings
> (2M hpage -> save 6 vmemmap pages; 1G hpage -> save 4094 vmemmap pages).
>
> In terms of savings, per 1Tb of memory, the struct page cost would go down
> with compound pagemap:
>
> * with 2M pages we lose 4G instead of 16G (0.39% instead of 1.5% of total memory)
> * with 1G pages we lose 8MB instead of 16G (0.0007% instead of 1.5% of total memory)
>
> Along the way I've extended it past 'struct page' overhead, *trying* to
> address a few performance issues we knew about for pmem, specifically in
> the {pin,get}_user_pages* function family with device-dax vmas, which are
> really slow even in the fast variants. THP is great in the -fast variants,
> but everything except hugetlbfs performs rather poorly in non-fast gup.
>
> So to summarize what the series does:
>
> Patches 1-5: Much like Muchun's series, we reuse tail page areas across a
> given page size (namely @align, as referred to by the remaining
> memremap/dax code), enabling memremap to initialize the ZONE_DEVICE pages
> as compound pages of a given @align order. The main difference, though, is
> that contrary to the hugetlbfs series, there's no vmemmap for the area yet,
> because we are onlining it. IOW there is no freeing of pages of an already
> initialized vmemmap as in the hugetlbfs case, which simplifies the logic
> (besides not being arch-specific). After these patches, there's a quite
> visible improvement in pmem memmap region bootstrap, given that we
> initialize fewer struct pages depending on the page size.
>
> NVDIMM namespace bootstrap improves from ~750ms to ~190ms/<=1ms on
> emulated NVDIMMs with 2M and 1G respectively. A proportional improvement
> is observed when running on actual NVDIMMs.
>
> Patches 6-8: Optimize grabbing/releasing a page refcount, given that we
> are working with compound pages, i.e. we do 1 increment/decrement to the
> head page for a given set of N subpages, as opposed to N individual writes.
> {get,pin}_user_pages_fast() for zone_device with a compound pagemap
> consequently improves considerably, and unpin_user_pages() improves as
> well when passed a set of consecutive pages:
>
>                                           before        after
> (get_user_pages_fast 1G;2M page size)   ~75k us -> ~3.2k ; ~5.2k us
> (pin_user_pages_fast 1G;2M page size)  ~125k us -> ~3.4k ; ~5.5k us
>
> The RDMA patch (patch 8/9) demonstrates the improvement for an existing
> user. For unpin_user_pages() we have an additional test to demonstrate
> the improvement. The test performs MR reg/unreg continuously and measures
> its rate for a given period. So essentially ib_mem_get and ib_mem_release
> are being stress tested, which at the end of the day means
> pin_user_pages_longterm() and unpin_user_pages() for a scatterlist:
>
> Before:
> 159 rounds in 5.027 sec: 31617.923 usec / round (device-dax)
> 466 rounds in 5.009 sec: 10748.456 usec / round (hugetlbfs)
>
> After:
> 305 rounds in 5.010 sec: 16426.047 usec / round (device-dax)
> 1073 rounds in 5.004 sec: 4663.622 usec / round (hugetlbfs)
>
> Patch 9: Improves {pin,get}_user_pages() and its longterm counterpart.
> It is very experimental; I imported most of follow_hugetlb_page(), except
> that we do the same trick as gup-fast. In doing the patch I feel this
> batching should live in follow_page_mask(), with that being changed to
> return a set of pages/something-else when walking over PMDs/PUDs for
> THP/devmap pages. This patch brings the previous test of MR reg/unreg
> (above) to parity between device-dax and hugetlbfs.
>
> Some of the patches are a little fresh/WIP (especially patches 3 and 9)
> and we are still running tests. Hence the RFC, asking for comments and
> the general direction of the work before continuing.
>
> Patches apply on top of linux-next tag next-20201208 (commit a9e26cb5f261).
>
> Comments and suggestions very much appreciated!
>
> Thanks,
> 	Joao
>
> Joao Martins (9):
>   memremap: add ZONE_DEVICE support for compound pages
>   sparse-vmemmap: Consolidate arguments in vmemmap section populate
>   sparse-vmemmap: Reuse vmemmap areas for a given page size
>   mm/page_alloc: Reuse tail struct pages for compound pagemaps
>   device-dax: Compound pagemap support
>   mm/gup: Grab head page refcount once for group of subpages
>   mm/gup: Decrement head page once for group of subpages
>   RDMA/umem: batch page unpin in __ib_mem_release()
>   mm: Add follow_devmap_page() for devdax vmas
>
>  drivers/dax/device.c           |  54 ++++++---
>  drivers/infiniband/core/umem.c |  25 +++-
>  include/linux/huge_mm.h        |   4 +
>  include/linux/memory_hotplug.h |  16 ++-
>  include/linux/memremap.h       |   2 +
>  include/linux/mm.h             |   6 +-
>  mm/gup.c                       | 130 ++++++++++++++++-----
>  mm/huge_memory.c               | 202 +++++++++++++++++++++++++++++++++
>  mm/memory_hotplug.c            |  13 ++-
>  mm/memremap.c                  |  13 ++-
>  mm/page_alloc.c                |  28 ++++-
>  mm/sparse-vmemmap.c            |  97 +++++++++++++--
>  mm/sparse.c                    |  16 +--
>  13 files changed, 531 insertions(+), 75 deletions(-)
>
> --
> 2.17.1
>

--
Yours,
Muchun