From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 30 Oct 2019 14:29:58 +0100
From: Michal Hocko
To: Vincent Whitchurch
Cc: akpm@linux-foundation.org, osalvador@suse.de, pasha.tatashin@oracle.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vincent Whitchurch
Subject: Re: [PATCH] mm/sparse: Consistently do not zero memmap
Message-ID: <20191030132958.GD31513@dhcp22.suse.cz>
References: <20191030131122.8256-1-vincent.whitchurch@axis.com>
In-Reply-To: <20191030131122.8256-1-vincent.whitchurch@axis.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Wed 30-10-19 14:11:22, Vincent Whitchurch wrote:
> sparsemem without VMEMMAP has two allocation paths to allocate the
> memory needed for its memmap (done in sparse_mem_map_populate()).
>
> In one allocation path (sparse_buffer_alloc() succeeds), the memory is
> not zeroed (since it was previously allocated with
> memblock_alloc_try_nid_raw()).
>
> In the other allocation path (sparse_buffer_alloc() fails and
> sparse_mem_map_populate() falls back to memblock_alloc_try_nid()), the
> memory is zeroed.
>
> AFAICS this difference does not appear to be on purpose. If the code is
> supposed to work with non-initialized memory (__init_single_page() takes
> care of zeroing the struct pages which are actually used), we should
> consistently not zero the memory, to avoid masking bugs.

You are right that this is not intentional.

> (I noticed this because on my ARM64 platform, with 1 GiB of memory the
> first [and only] section is allocated from the zeroing path while with
> 2 GiB of memory the first 1 GiB section is allocated from the
> non-zeroing path.)
Do I get it right that sparse_buffer_init couldn't allocate memmap for
the full node for some reason and so sparse_init_nid would have to
allocate one for each memory section?

> Signed-off-by: Vincent Whitchurch

Anyway the patch is OK. Even though this is not a bug strictly speaking,
it is certainly suboptimal behavior because zeroing takes time, so I
would flag this for a stable tree 4.19+. There is no clear Fixes tag to
apply (35fd1eb1e8212 would get closest I guess).

Acked-by: Michal Hocko

> ---
>  mm/sparse.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index f6891c1992b1..01e467adc219 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -458,7 +458,7 @@ struct page __init *__populate_section_memmap(unsigned long pfn,
>  	if (map)
>  		return map;
>  
> -	map = memblock_alloc_try_nid(size,
> +	map = memblock_alloc_try_nid_raw(size,
>  					  PAGE_SIZE, addr,
>  					  MEMBLOCK_ALLOC_ACCESSIBLE, nid);
>  	if (!map)
> -- 
> 2.20.0

-- 
Michal Hocko
SUSE Labs