Date: Fri, 11 Sep 2020 17:51:43 +0300
From: "Kirill A. Shutemov"
To: Matthew Wilcox
Cc: linux-mm@kvack.org, Andrew Morton, Huang Ying
Subject: Re: [PATCH 02/11] mm/memory: Remove page fault assumption of compound page size
Message-ID: <20200911145143.chqbu4vl575puodw@box>
References: <20200908195539.25896-1-willy@infradead.org>
 <20200908195539.25896-3-willy@infradead.org>
 <20200909142904.acca6gthbffk3jwq@box>
 <20200909145035.GH6583@casper.infradead.org>
In-Reply-To: <20200909145035.GH6583@casper.infradead.org>

On Wed, Sep 09, 2020 at 03:50:35PM +0100, Matthew Wilcox wrote:
> On Wed, Sep 09, 2020 at 05:29:04PM +0300, Kirill A. Shutemov wrote:
> > On Tue, Sep 08, 2020 at 08:55:29PM +0100, Matthew Wilcox (Oracle) wrote:
> > > A compound page in the page cache will not necessarily be of PMD size,
> > > so check explicitly.
> > >
> > > Signed-off-by: Matthew Wilcox (Oracle)
> > > ---
> > >  mm/memory.c | 7 ++++---
> > >  1 file changed, 4 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 602f4283122f..4b35b4e71e64 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -3562,13 +3562,14 @@ static vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
> > >  	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
> > >  	pmd_t entry;
> > >  	int i;
> > > -	vm_fault_t ret;
> > > +	vm_fault_t ret = VM_FAULT_FALLBACK;
> > >
> > >  	if (!transhuge_vma_suitable(vma, haddr))
> > > -		return VM_FAULT_FALLBACK;
> > > +		return ret;
> > >
> > > -	ret = VM_FAULT_FALLBACK;
> > >  	page = compound_head(page);
> > > +	if (page_order(page) != HPAGE_PMD_ORDER)
> > > +		return ret;
> >
> > Maybe also VM_BUG_ON_PAGE(page_order(page) > HPAGE_PMD_ORDER, page)?
> > Just in case.
>
> In the patch where I actually start creating THPs, I limit the order to
> HPAGE_PMD_ORDER, so we're not going to see this today.
> At some point
> in the future, I can imagine that we allow THPs larger than PMD size,
> and what we'd want alloc_set_pte() to look like is:
>
> 	if (pud_none(*vmf->pud) && PageTransCompound(page)) {
> 		ret = do_set_pud(vmf, page);
> 		if (ret != VM_FAULT_FALLBACK)
> 			return ret;
> 	}
> 	if (pmd_none(*vmf->pmd) && PageTransCompound(page)) {
> 		ret = do_set_pmd(vmf, page);
> 		if (ret != VM_FAULT_FALLBACK)
> 			return ret;
> 	}
>
> Once we're in that situation, in do_set_pmd(), we'd want to figure out
> which sub-page of the >PMD-sized page to insert.  But I don't want to
> write code for that now.
>
> So, what's the right approach if somebody does call alloc_set_pte()
> with a >PMD-sized page?  It's not exported, so the only two ways to get
> it called with a >PMD-sized page are to (1) persuade filemap_map_pages()
> to call it, which means putting the page in the page cache, or (2) return
> it from vm_ops->fault.  If someone actually does that (an interesting
> device driver, perhaps), I don't think hitting it with a BUG is the
> right response.  I think the right response would be to map the
> appropriate PMD-sized chunk of the page, but we don't even do that
> today -- we map the first PMD-sized chunk of the page.
>
> With this patch, we'll simply map the appropriate PAGE_SIZE chunk at the
> requested address.  So this would be a bugfix for such a demented driver.
> At some point, it'd be nice to handle this with a PMD, but I don't want
> to write that code without a test case.  We could probably simulate
> it with the page cache THP code and be super-aggressive about creating
> order-10 pages ... but this is feeling more and more out of scope for
> this patch set, which today hit 99 patches.

Okay, fair enough. VM_BUG_ON() is too strong a reaction here, since we
can make a reasonable fallback. Maybe WARN_ON_ONCE() would make sense?
It would mark the place that has to be adjusted once we get pages above
PMD order.
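Something along these lines, perhaps -- an untested sketch of the check
in do_set_pmd(), reusing page_order()/HPAGE_PMD_ORDER from your patch,
just to illustrate what I mean:

	page = compound_head(page);
	if (page_order(page) != HPAGE_PMD_ORDER) {
		/*
		 * Fall back to mapping with PTEs for any non-PMD-order
		 * page, but warn once when the order is *above*
		 * HPAGE_PMD_ORDER, so this spot is easy to find once
		 * such pages start to show up.
		 */
		WARN_ON_ONCE(page_order(page) > HPAGE_PMD_ORDER);
		return ret;
	}

-- 
 Kirill A. Shutemov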