Date: Mon, 31 Jul 2017 14:55:56 +0200
From: Michal Hocko
Subject: Re: [RFC PATCH 2/5] mm, arch: unify vmemmap_populate altmap handling
Message-ID: <20170731125555.GB4829@dhcp22.suse.cz>
References: <20170726083333.17754-1-mhocko@kernel.org>
 <20170726083333.17754-3-mhocko@kernel.org>
 <20170731144053.38c8b012@thinkpad>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170731144053.38c8b012@thinkpad>
To: Gerald Schaefer
Cc: linux-mm@kvack.org, Andrew Morton, Mel Gorman, Vlastimil Babka,
 Andrea Arcangeli, Jerome Glisse, Reza Arbab, Yasuaki Ishimatsu,
 qiuxishi@huawei.com, Kani Toshimitsu, slaoub@gmail.com, Joonsoo Kim,
 Andi Kleen, Daniel Kiper, Igor Mammedov, Vitaly Kuznetsov, LKML,
 Benjamin Herrenschmidt, Catalin Marinas, Fenghua Yu, Heiko Carstens,
 "H. Peter Anvin", Ingo Molnar, Martin Schwidefsky, Michael Ellerman,
 Paul Mackerras, Thomas Gleixner, Tony Luck, Will Deacon

On Mon 31-07-17 14:40:53, Gerald Schaefer wrote:
[...]
> > @@ -247,12 +248,12 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
> >  			 * use large frames even if they are only partially
> >  			 * used.
> >  			 * Otherwise we would have also page tables since
> > -			 * vmemmap_populate gets called for each section
> > +			 * __vmemmap_populate gets called for each section
> >  			 * separately. */
> >  			if (MACHINE_HAS_EDAT1) {
> >  				void *new_page;
> >  
> > -				new_page = vmemmap_alloc_block(PMD_SIZE, node);
> > +				new_page = __vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
> >  				if (!new_page)
> >  					goto out;
> >  				pmd_val(*pm_dir) = __pa(new_page) | sgt_prot;
> 
> There is another call to vmemmap_alloc_block() in this function, a couple
> of lines below, this should also be replaced by __vmemmap_alloc_block_buf().

I've noticed that one, but in general I have only transformed PMD
mappings, because we shouldn't even get to the pte level if the former
works, AFAICS. Memory sections should always be 2MB aligned, unless I am
missing something. Or is this not true?
-- 
Michal Hocko
SUSE Labs
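
For reference, a minimal sketch of the alignment argument above (not the
actual kernel code; SECTION_SIZE_BITS, PAGE_SHIFT and sizeof(struct page)
are assumed values picked to mirror common defaults, and differ per arch):

#include <assert.h>
#include <stdint.h>

/* Assumed values for illustration only; the real ones are per-arch. */
#define SECTION_SIZE_BITS	28	/* 256 MiB memory sections (assumption) */
#define PAGE_SHIFT		12	/* 4 KiB base pages */
#define PAGES_PER_SECTION	(1UL << (SECTION_SIZE_BITS - PAGE_SHIFT))
#define STRUCT_PAGE_SIZE	64UL	/* assumed sizeof(struct page) */
#define PMD_SIZE		(2UL << 20)	/* 2 MiB large mapping */

int main(void)
{
	/*
	 * vmemmap population is done per memory section, so each call covers
	 * PAGES_PER_SECTION struct pages worth of vmemmap.  If that chunk is
	 * a multiple of PMD_SIZE, the large-frame (PMD) branch can always be
	 * taken and the pte-level fallback is never reached -- which is why
	 * only the PMD allocation was switched to the altmap-aware variant.
	 */
	uint64_t vmemmap_per_section = PAGES_PER_SECTION * STRUCT_PAGE_SIZE;

	assert(vmemmap_per_section % PMD_SIZE == 0);	/* 4 MiB % 2 MiB == 0 */
	return 0;
}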