From: Dan Williams
Date: Wed, 20 Mar 2019 13:57:25 -0700
Subject: Re: [PATCH 2/2] mm/dax: Don't enable huge dax mapping by default
To: "Aneesh Kumar K.V"
Cc: Jan Kara, linux-nvdimm, Michael Ellerman, Linux Kernel Mailing List,
    Linux MM, Ross Zwisler, Andrew Morton, linuxppc-dev, "Kirill A. Shutemov"
References: <20190228083522.8189-1-aneesh.kumar@linux.ibm.com>
    <20190228083522.8189-2-aneesh.kumar@linux.ibm.com>
    <87k1hc8iqa.fsf@linux.ibm.com> <871s3aqfup.fsf@linux.ibm.com>
    <87bm267ywc.fsf@linux.ibm.com> <878sxa7ys5.fsf@linux.ibm.com>
Shutemov" Content-Type: text/plain; charset="UTF-8" Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed, Mar 20, 2019 at 8:34 AM Dan Williams wrote: > > On Wed, Mar 20, 2019 at 1:09 AM Aneesh Kumar K.V > wrote: > > > > Aneesh Kumar K.V writes: > > > > > Dan Williams writes: > > > > > >> > > >>> Now what will be page size used for mapping vmemmap? > > >> > > >> That's up to the architecture's vmemmap_populate() implementation. > > >> > > >>> Architectures > > >>> possibly will use PMD_SIZE mapping if supported for vmemmap. Now a > > >>> device-dax with struct page in the device will have pfn reserve area aligned > > >>> to PAGE_SIZE with the above example? We can't map that using > > >>> PMD_SIZE page size? > > >> > > >> IIUC, that's a different alignment. Currently that's handled by > > >> padding the reservation area up to a section (128MB on x86) boundary, > > >> but I'm working on patches to allow sub-section sized ranges to be > > >> mapped. > > > > > > I am missing something w.r.t code. The below code align that using nd_pfn->align > > > > > > if (nd_pfn->mode == PFN_MODE_PMEM) { > > > unsigned long memmap_size; > > > > > > /* > > > * vmemmap_populate_hugepages() allocates the memmap array in > > > * HPAGE_SIZE chunks. > > > */ > > > memmap_size = ALIGN(64 * npfns, HPAGE_SIZE); > > > offset = ALIGN(start + SZ_8K + memmap_size + dax_label_reserve, > > > nd_pfn->align) - start; > > > } > > > > > > IIUC that is finding the offset where to put vmemmap start. And that has > > > to be aligned to the page size with which we may end up mapping vmemmap > > > area right? > > Right, that's the physical offset of where the vmemmap ends, and the > memory to be mapped begins. > > > > Yes we find the npfns by aligning up using PAGES_PER_SECTION. But that > > > is to compute howmany pfns we should map for this pfn dev right? > > > > > > > Also i guess those 4K assumptions there is wrong? > > Yes, I think to support non-4K-PAGE_SIZE systems the 'pfn' metadata > needs to be revved and the PAGE_SIZE needs to be recorded in the > info-block. How often does a system change page-size. Is it fixed or do environment change it from one boot to the next? I'm thinking through the behavior of what do when the recorded PAGE_SIZE in the info-block does not match the current system page size. The simplest option is to just fail the device and require it to be reconfigured. Is that acceptable?