Subject: Re: [PATCH RFC 0/9] mm, sparse-vmemmap: Introduce compound pagemaps
From: David Hildenbrand
Organization: Red Hat GmbH
To: Joao Martins, linux-mm@kvack.org
Cc: Dan Williams, Ira Weiny, linux-nvdimm@lists.01.org, Matthew Wilcox,
 Jason Gunthorpe, Jane Chu, Muchun Song, Mike Kravetz, Andrew Morton
References: <20201208172901.17384-1-joao.m.martins@oracle.com>
Message-ID: <3d6923af-2684-cbdc-928c-2d849cc2062b@redhat.com>
Date: Wed, 9 Dec 2020 10:38:56 +0100
In-Reply-To: <20201208172901.17384-1-joao.m.martins@oracle.com>

On 08.12.20 18:28, Joao Martins wrote:
> Hey,
> 
> This small series attempts to minimize 'struct page' overhead by
> pursuing a similar approach as Muchun Song's series "Free some vmemmap
> pages of hugetlb page" [0], but applied to devmap/ZONE_DEVICE.
> 
> [0] https://lore.kernel.org/linux-mm/20201130151838.11208-1-songmuchun@bytedance.com/
> 
> The link above describes it quite nicely, but the idea is to reuse tail
> page vmemmap areas, particularly the area which only describes tail pages.
> A vmemmap page describes 64 struct pages, so the first page of a given
> ZONE_DEVICE vmemmap would contain the head page and 63 tail pages. The second
> vmemmap page would contain only tail pages, and that's what gets reused across
> the rest of the subsection/section. The bigger the page size, the bigger the
> savings (2M hpage -> save 6 vmemmap pages; 1G hpage -> save 4094 vmemmap pages).
> 
> In terms of savings, per 1TB of memory, the struct page cost would go down
> with compound pagemap:
> 
> * with 2M pages we lose 4G instead of 16G (0.39% instead of 1.5% of total memory)
> * with 1G pages we lose 8MB instead of 16G (0.0007% instead of 1.5% of total memory)
> 

That's the dream :)

> Along the way I've extended it past 'struct page' overhead, *trying* to address a
> few performance issues we knew about for pmem, specifically in the
> {pin,get}_user_pages* function family with device-dax vmas, which are really
> slow even with the fast variants. THP is great on the -fast variants, but all
> except hugetlbfs perform rather poorly on non-fast gup.
> 
> So to summarize what the series does:
> 
> Patches 1-5: Much like Muchun's series, we reuse tail page areas across a given
> page size (namely @align, as it is referred to by the remaining memremap/dax
> code), and enable memremap to initialize the ZONE_DEVICE pages as compound pages
> of a given @align order. The main difference, though, is that contrary to the
> hugetlbfs series, there's no vmemmap for the area, because we are onlining it.

Yeah, I'd argue that this case is a lot easier to handle. When the buddy is
involved, things get more complicated.

-- 
Thanks,

David / dhildenb