From: David Hildenbrand <david@redhat.com>
To: Zi Yan <ziy@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	x86@kernel.org, Andy Lutomirski <luto@kernel.org>,
	"Rafael J . Wysocki" <rafael@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Michal Hocko <mhocko@suse.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	linux-ia64@vger.kernel.org, linux-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH 1/7] mm: sparse: set/clear subsection bitmap when pages are onlined/offlined.
Date: Thu, 6 May 2021 21:14:42 +0200	[thread overview]
Message-ID: <146a1ec6-38b3-9724-b346-9bb6e7e24c72@redhat.com> (raw)
In-Reply-To: <71EE13C0-9CF7-4F1F-9C17-64500A854BD8@nvidia.com>

>> But after glancing at patch #2, I'd rather stop right away digging deeper into this series :)
> 
> What is the issue with patch 2, which makes pageblock_order a variable all the time? BTW, patch 2 fixes a bug by exporting pageblock_order, since when HUGETLB_PAGE_SIZE_VARIABLE is set, virtio-mem will not see pageblock_order as a variable, which could happen for PPC_BOOK3S_64 with virtio-mem enabled, right? Or is this an invalid combination?

virtio-mem currently supports x86-64 only; aarch64 and s390x prototypes are available.

If I understood "Make pageblock_order a variable and
set it to the max of HUGETLB_PAGE_ORDER, MAX_ORDER - 1" correctly,
you're setting the pageblock_order on x86_64 to 4 MiB. That means you're
no longer grouping for THP (2 MiB) but for MAX_ORDER - 1, which is not
what we want. We want to optimize for THP.
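
For reference, a stand-alone sketch of that arithmetic. The macro
values are the x86_64 defaults (4 KiB base pages, 2 MiB PMD huge
pages, MAX_ORDER = 11); this is an illustration, not kernel code:

	#include <stdio.h>

	#define PAGE_SHIFT		12	/* 4 KiB base pages */
	#define MAX_ORDER		11	/* x86_64 default */
	#define HPAGE_SHIFT		21	/* 2 MiB PMD huge pages */
	#define HUGETLB_PAGE_ORDER	(HPAGE_SHIFT - PAGE_SHIFT)	/* 9 */

	/* size in MiB of an allocation of the given page order */
	static unsigned long order_to_mib(int order)
	{
		return (1UL << order) >> (20 - PAGE_SHIFT);
	}

	int main(void)
	{
		int thp = HUGETLB_PAGE_ORDER;	/* 9  -> 2 MiB */
		int proposed = MAX_ORDER - 1;	/* 10 -> 4 MiB */
		int pb = proposed > thp ? proposed : thp;

		printf("THP order %d = %lu MiB\n", thp, order_to_mib(thp));
		printf("proposed pageblock order %d = %lu MiB\n",
		       pb, order_to_mib(pb));
		return 0;
	}

which prints "THP order 9 = 2 MiB" and "proposed pageblock order
10 = 4 MiB": the proposed max() always picks MAX_ORDER - 1 here.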

Also, that would affect virtio-balloon with free page reporting (it
would report only 4 MiB chunks instead of 2 MiB chunks).
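
(For context: the reporting granularity follows the pageblock order
directly; in kernels of this era, mm/page_reporting.h ties the two
together via

	#define PAGE_REPORTING_MIN_ORDER	pageblock_order

so bumping pageblock_order from 9 to 10 would raise the minimum chunk
virtio-balloon can report from 2 MiB to 4 MiB.)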

> 
>>
>> I think what would really help is drafting a design of how it all could look, and then first discussing the high-level design, investigating how it could play along with all existing users, existing workloads, and existing use cases. Proposing such changes without a clear picture in mind and a high-level overview might give you some unpleasant reactions from some of the developers around here ;)
> 
> Please see my other email for a high-level design. Also, I sent the patchset as an RFC to gather information on users, workloads, and use cases I did not know about, and I learnt a lot from your replies. :) Feedback is always welcome, but I am not sure why it needs to make people unpleasant. ;)

Rather, the replies might be unpleasant ;)

-- 
Thanks,

David / dhildenb




Thread overview: 28+ messages
2021-05-06 15:26 [RFC PATCH 0/7] Memory hotplug/hotremove at subsection size Zi Yan
2021-05-06 15:26 ` [RFC PATCH 1/7] mm: sparse: set/clear subsection bitmap when pages are onlined/offlined Zi Yan
2021-05-06 17:48   ` David Hildenbrand
2021-05-06 19:03     ` Zi Yan
2021-05-06 19:14       ` David Hildenbrand [this message]
2021-05-06 15:26 ` [RFC PATCH 2/7] mm: set pageblock_order to the max of HUGETLB_PAGE_ORDER and MAX_ORDER-1 Zi Yan
2021-05-06 15:26 ` [RFC PATCH 3/7] mm: memory_hotplug: decouple memory_block size with section size Zi Yan
2021-05-06 15:26 ` [RFC PATCH 4/7] mm: pageblock: allow set/unset migratetype for partial pageblock Zi Yan
2021-05-06 15:26 ` [RFC PATCH 5/7] mm: memory_hotplug, sparse: enable memory hotplug/hotremove subsections Zi Yan
2021-05-06 15:26 ` [RFC PATCH 6/7] arch: x86: no MAX_ORDER exceeds SECTION_SIZE check for 32bit vdso Zi Yan
2021-05-06 15:26 ` [RFC PATCH 7/7] [not for merge] mm: increase SECTION_SIZE_BITS to 31 Zi Yan
2021-05-06 15:31 ` [RFC PATCH 0/7] Memory hotplug/hotremove at subsection size David Hildenbrand
2021-05-06 15:37   ` Zi Yan
2021-05-06 15:40     ` David Hildenbrand
2021-05-06 15:50       ` Zi Yan
2021-05-06 16:28         ` David Hildenbrand
2021-05-06 18:49           ` Zi Yan
2021-05-06 19:10             ` David Hildenbrand
2021-05-06 19:30               ` Matthew Wilcox
2021-05-06 19:38                 ` David Hildenbrand
2021-05-06 15:38   ` David Hildenbrand
2021-05-07 11:55   ` Michal Hocko
2021-05-07 14:00     ` David Hildenbrand
2021-05-10 14:36       ` Zi Yan
2021-05-12 16:14         ` David Hildenbrand
2021-06-02 15:56         ` Zi Yan
2021-06-14 11:32           ` David Hildenbrand
2021-05-06 15:42 ` Zi Yan
