Date: Mon, 1 Jul 2019 14:48:09 +0200
From: Michal Hocko
To: David Hildenbrand
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, linux-arm-kernel@lists.infradead.org, akpm@linux-foundation.org, Dan Williams, Wei Yang, Igor Mammedov, Catalin Marinas, Will Deacon, Mark Rutland, Ard Biesheuvel, Chintan Pandya, Mike Rapoport, Jun Yao, Yu Zhao, Robin Murphy, Anshuman Khandual
Subject: Re: [PATCH v3 04/11] arm64/mm: Add temporary arch_remove_memory() implementation
Message-ID: <20190701124809.GV6376@dhcp22.suse.cz>
References: <20190527111152.16324-1-david@redhat.com> <20190527111152.16324-5-david@redhat.com>
In-Reply-To: <20190527111152.16324-5-david@redhat.com>

On Mon 27-05-19 13:11:45, David Hildenbrand wrote:
> A proper arch_remove_memory() implementation is on its way, which also
> cleanly removes page tables in arch_add_memory() in case something goes
> wrong.
> 
> As we want to use arch_remove_memory() in case something goes wrong
> during memory hotplug after arch_add_memory() finished, let's add
> a temporary hack that is sufficient until we get a proper
> implementation that cleans up page table entries.
> 
> We will remove CONFIG_MEMORY_HOTREMOVE around this code in follow-up
> patches.

I would drop this one as well (like the s390 counterpart).
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Mark Rutland
> Cc: Andrew Morton
> Cc: Ard Biesheuvel
> Cc: Chintan Pandya
> Cc: Mike Rapoport
> Cc: Jun Yao
> Cc: Yu Zhao
> Cc: Robin Murphy
> Cc: Anshuman Khandual
> Signed-off-by: David Hildenbrand
> ---
>  arch/arm64/mm/mmu.c | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index a1bfc4413982..e569a543c384 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1084,4 +1084,23 @@ int arch_add_memory(int nid, u64 start, u64 size,
>  	return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT,
>  			   restrictions);
>  }
> +#ifdef CONFIG_MEMORY_HOTREMOVE
> +void arch_remove_memory(int nid, u64 start, u64 size,
> +			struct vmem_altmap *altmap)
> +{
> +	unsigned long start_pfn = start >> PAGE_SHIFT;
> +	unsigned long nr_pages = size >> PAGE_SHIFT;
> +	struct zone *zone;
> +
> +	/*
> +	 * FIXME: Cleanup page tables (also in arch_add_memory() in case
> +	 * adding fails). Until then, this function should only be used
> +	 * during memory hotplug (adding memory), not for memory
> +	 * unplug. ARCH_ENABLE_MEMORY_HOTREMOVE must not be
> +	 * unlocked yet.
> +	 */
> +	zone = page_zone(pfn_to_page(start_pfn));
> +	__remove_pages(zone, start_pfn, nr_pages, altmap);
> +}
> +#endif
>  #endif
> -- 
> 2.20.1

-- 
Michal Hocko
SUSE Labs