Date: Wed, 24 Jun 2020 10:14:03 +0800
From: Baoquan He
To: Dan Williams
Cc: Wei Yang, Andrew Morton, Oscar Salvador, Linux MM,
	Linux Kernel Mailing List, David Hildenbrand
Subject: Re: [PATCH] mm/sparse: never partially remove memmap for early section
Message-ID: <20200624021403.GH3346@MiWiFi-R3L-srv>
References: <20200623094258.6705-1-richard.weiyang@linux.alibaba.com>
 <20200624014737.GG3346@MiWiFi-R3L-srv>
In-Reply-To: <20200624014737.GG3346@MiWiFi-R3L-srv>

On 06/24/20 at 09:47am, Baoquan He wrote:
> On 06/23/20 at 05:21pm, Dan Williams wrote:
> > On Tue, Jun 23, 2020 at 2:43 AM Wei Yang wrote:
> > >
> > > For early sections, we assume their memmap will never be partially
> > > removed. But the current behavior breaks this.
> >
> > Where do we assume that?
> >
> > The primary use case for this was mapping pmem that collides with
> > System-RAM in the same 128MB section. That collision will certainly be
> > depopulated on-demand depending on the state of the pmem device. So,
> > I'm not understanding the problem or the benefit of this change.
>
> I was also confused when reviewing this patch; the patch log is a little
> short and simple. From the current code, with SPARSE_VMEMMAP enabled, we
> do build the memmap for the whole memory section during boot, even though
> some sections may be only partially populated. We just mark the subsection
> map for the present pages.
>
> Later, if a pmem device is mapped into such a partially populated boot
> memory section, we just fill the relevant subsection map and return
> directly from section_activate(), without building the memmap for it,
> because the memmap for the non-present RAM part is already there. I guess
> this is what Wei is trying to do to keep the behaviour consistent for pmem
> device adding, or

OK, from Wei's reply I realized this patch is a necessary fix. If we
depopulate the partial memmap when removing a pmem device, a later pmem
re-add won't have a valid memmap.

> pmem device removing and later adding again.
>
> Please correct me if I am wrong.
>
> To me, fixing it looks good. But a clear doc or code comment is
> necessary so that people can understand the code in less time.
> Leaving it as is doesn't cause harm. I personally tend to choose
> the former.
>
> paging_init()
> ->sparse_init()
>   ->sparse_init_nid()
>   {
>     ...
>     for_each_present_section_nr(pnum_begin, pnum) {
>       ...
>       map = __populate_section_memmap(pfn, PAGES_PER_SECTION,
>                                       nid, NULL);
>       ...
>     }
>   }
> ...
> ->zone_sizes_init()
>   ->free_area_init()
>   {
>     for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
>       subsection_map_init(start_pfn, end_pfn - start_pfn);
>     }
>   }
>
> __add_pages()
> ->sparse_add_section()
>   ->section_activate()
>   {
>     ...
>     fill_subsection_map();
>     if (nr_pages < PAGES_PER_SECTION && early_section(ms)) <----------*********
>       return pfn_to_page(pfn);
>     ...
>   }
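
For readers following along, here is a minimal sketch of the add/remove
asymmetry being discussed. It assumes the mm/sparse.c helpers of that era
(fill_subsection_map(), clear_subsection_map(), early_section(),
populate_section_memmap(), depopulate_section_memmap()); it illustrates the
reasoning above, not the exact patch, and omits locking, error handling and
the surrounding includes:

    /*
     * Illustrative sketch only: how a partially present early (boot-time)
     * section behaves on subsection hot-add vs. hot-remove.
     */
    static struct page *section_activate_sketch(int nid, unsigned long pfn,
                                                unsigned long nr_pages)
    {
            struct mem_section *ms = __pfn_to_section(pfn);

            fill_subsection_map(pfn, nr_pages);

            /*
             * Early sections had their whole memmap built in
             * sparse_init_nid(), so a partial hot-add (e.g. pmem sharing
             * a 128MB section with System-RAM) reuses it as is.
             */
            if (nr_pages < PAGES_PER_SECTION && early_section(ms))
                    return pfn_to_page(pfn);

            return populate_section_memmap(pfn, nr_pages, nid, NULL);
    }

    static void section_deactivate_sketch(unsigned long pfn,
                                          unsigned long nr_pages)
    {
            struct mem_section *ms = __pfn_to_section(pfn);

            clear_subsection_map(pfn, nr_pages);

            /*
             * The point of the discussion: if the memmap of an early
             * section were partially depopulated here, a later re-add
             * would take the early_section() shortcut above and end up
             * with no valid memmap. Hence the memmap of an early section
             * must never be partially removed.
             */
            if (!early_section(ms))
                    depopulate_section_memmap(pfn, nr_pages, NULL);
    }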