Subject: Re: [PATCH RFC v1 01/12] mm/memory_hotplug: Don't allow to online/offline memory blocks with holes
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Michal Hocko, Andrew Morton, kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, devel@driverdev.osuosl.org, xen-devel@lists.xenproject.org, x86@kernel.org, Alexander Duyck, Alex Williamson, Allison Randal, Andy Lutomirski, "Aneesh Kumar K.V", Anthony Yznaga, Ben Chan, Benjamin Herrenschmidt, Borislav Petkov, Boris Ostrovsky, Christophe Leroy, Cornelia Huck, Dan Carpenter, Dan Williams, Dave Hansen, Fabio Estevam, Greg Kroah-Hartman, Haiyang Zhang, "H. Peter Anvin", Ingo Molnar, "Isaac J. Manjarres", Jeremy Sowden, Jim Mattson, Joerg Roedel, Johannes Weiner, Juergen Gross, KarimAllah Ahmed, Kate Stewart, Kees Cook, "K. Y. Srinivasan", Madhumitha Prabakaran, Matt Sickler, Mel Gorman, Michael Ellerman, Mike Rapoport, Nicholas Piggin, Nishka Dasgupta, Oscar Salvador, Paolo Bonzini, Paul Mackerras, Pavel Tatashin, Peter Zijlstra, Qian Cai, Radim Krčmář, Rob Springer, Sasha Levin, Sean Christopherson, Simon Sandström, Stefano Stabellini, Stephen Hemminger, Thomas Gleixner, Todd Poynor, Vandana BN, Vitaly Kuznetsov, Vlastimil Babka, Wanpeng Li, YueHaibing
References: <20191022171239.21487-1-david@redhat.com> <20191022171239.21487-2-david@redhat.com>
From: Anshuman Khandual
Message-ID: <4aa3c72b-8991-9e43-80d7-a906ae79160b@arm.com>
Date: Thu, 24 Oct 2019 09:23:16 +0530
In-Reply-To: <20191022171239.21487-2-david@redhat.com>

On 10/22/2019 10:42 PM, David Hildenbrand wrote:
> Our onlining/offlining code is unnecessarily complicated. Only memory
> blocks added during boot can have holes. Hotplugged memory never has
> holes. That memory is already online.

Why can't memory hot plugged at runtime have holes (e.g. a semi-bad
DIMM)? Currently, do we just abort adding a memory block if it contains
holes?

> When we stop allowing to offline memory blocks with holes, we implicitly
> stop to online memory blocks with holes.

This reduces hotplug support for memory blocks with holes just to
simplify the code. Is it worth it?

> This allows to simplify the code. For example, we no longer have to
> worry about marking pages that fall into memory holes PG_reserved when
> onlining memory.
We can stop setting pages PG_reserved. But could there not be some other
way of tracking these holes, without the page reserved bit? Perhaps in
the memory section itself, with the corresponding struct pages just
remaining poisoned? Just wondering, might be all wrong here.

> Offlining memory blocks added during boot is usually not guranteed to work
> either way. So stopping to do that (if anybody really used and tested

That guarantee does not exist right now because of how boot memory could
have been used after boot, not because of a limitation in memory hot
remove itself.

> this over the years) should not really hurt. For the use case of
> offlining memory to unplug DIMMs, we should see no change. (holes on
> DIMMs would be weird)

Holes on a DIMM could be due to HW errors affecting only parts of it. By
not allowing such a DIMM's hot add and remove, we are definitely reducing
the scope of overall hotplug functionality. Is code simplification in
itself worth this reduction in functionality?

> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Oscar Salvador
> Cc: Pavel Tatashin
> Cc: Dan Williams
> Signed-off-by: David Hildenbrand
> ---
>  mm/memory_hotplug.c | 26 ++++++++++++++++++++++++--
>  1 file changed, 24 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 561371ead39a..7210f4375279 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1447,10 +1447,19 @@ static void node_states_clear_node(int node, struct memory_notify *arg)
>  	node_clear_state(node, N_MEMORY);
>  }
>
> +static int count_system_ram_pages_cb(unsigned long start_pfn,
> +				     unsigned long nr_pages, void *data)
> +{
> +	unsigned long *nr_system_ram_pages = data;
> +
> +	*nr_system_ram_pages += nr_pages;
> +	return 0;
> +}
> +
>  static int __ref __offline_pages(unsigned long start_pfn,
>  		unsigned long end_pfn)
>  {
> -	unsigned long pfn, nr_pages;
> +	unsigned long pfn, nr_pages = 0;
>  	unsigned long offlined_pages = 0;
>  	int ret, node, nr_isolate_pageblock;
>  	unsigned long flags;
> @@ -1461,6 +1470,20 @@ static int __ref __offline_pages(unsigned long start_pfn,
>
>  	mem_hotplug_begin();
>
> +	/*
> +	 * We don't allow to offline memory blocks that contain holes
> +	 * and consecuently don't allow to online memory blocks that contain
> +	 * holes. This allows to simplify the code quite a lot and we don't
> +	 * have to mess with PG_reserved pages for memory holes.
> +	 */
> +	walk_system_ram_range(start_pfn, end_pfn - start_pfn, &nr_pages,
> +			      count_system_ram_pages_cb);
> +	if (nr_pages != end_pfn - start_pfn) {
> +		ret = -EINVAL;
> +		reason = "memory holes";
> +		goto failed_removal;
> +	}
> +
>  	/* This makes hotplug much easier...and readable.
>  	   we assume this for now. .*/
>  	if (!test_pages_in_a_zone(start_pfn, end_pfn, &valid_start,
> @@ -1472,7 +1495,6 @@ static int __ref __offline_pages(unsigned long start_pfn,
>
>  	zone = page_zone(pfn_to_page(valid_start));
>  	node = zone_to_nid(zone);
> -	nr_pages = end_pfn - start_pfn;
>
>  	/* set above range as isolated */
>  	ret = start_isolate_page_range(start_pfn, end_pfn,
>