From: Axel Rasmussen
Date: Thu, 3 Nov 2022 10:34:38 -0700
Subject: Re: [PATCH 3/5] userfualtfd: Replace lru_cache functions with folio_add functions
To: Peter Xu
Cc: Matthew Wilcox, "Vishal Moola (Oracle)", linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	akpm@linux-foundation.org, Hugh Dickins
References: <20221101175326.13265-1-vishal.moola@gmail.com>
	<20221101175326.13265-4-vishal.moola@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Nov 2, 2022 at 1:44 PM Peter Xu wrote:
>
> On Wed, Nov 02, 2022 at 07:21:19PM +0000, Matthew Wilcox wrote:
> > On Wed, Nov 02, 2022 at 03:02:35PM -0400, Peter Xu wrote:
> > > Does the patch attached look reasonable to you?
> >
> > Mmm, no.  If the page is in the swap cache, this will be "true".
>
> It will not happen in practise, right?
>
> I mean, shmem_get_folio() should have done the swap-in, and we should have
> the page lock held at the meantime.
>
> For anon, mcopy_atomic_pte() is the only user and it's passing in a newly
> allocated page here.
>
> >
> > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > > index 3d0fef3980b3..650ab6cfd5f4 100644
> > > --- a/mm/userfaultfd.c
> > > +++ b/mm/userfaultfd.c
> > > @@ -64,7 +64,7 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
> > >  	pte_t _dst_pte, *dst_pte;
> > >  	bool writable = dst_vma->vm_flags & VM_WRITE;
> > >  	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
> > > -	bool page_in_cache = page->mapping;
> > > +	bool page_in_cache = page_mapping(page);
> >
> > We could do:
> >
> > 	struct page *head = compound_head(page);
> > 	bool page_in_cache = head->mapping && !PageMappingFlags(head);
>
> Sounds good to me, but it just gets a bit complicated.
>
> If page_mapping() doesn't sound good, how about we just pass that over from
> callers?  We only have three, so quite doable too.

For what it's worth, I think I like Matthew's version better than the original
patch.

This is because, although page_mapping() looks simpler here, looking into the
definition of page_mapping() I feel it's handling several cases, not all of
which are relevant here (or, as Matthew points out, would actually be wrong if
it were possible to reach those cases here).

It's not clear to me what is meant by "pass that over from callers"? Do you
mean, have callers pass in true/false for page_in_cache directly? That could
work, but I still think I prefer Matthew's version slightly better, if only
because this function already takes a lot of arguments.

>
> --
> Peter Xu
>
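
[Editor's note: a rough sketch of the "several cases" page_mapping() handles,
condensed from memory of the mm code around this kernel version rather than
copied from the tree. The helper name sketch_page_mapping() is made up for
illustration; the symbols it calls are real kernel ones, but the body is
abridged and may not match the exact source.]

	/*
	 * Illustrative sketch only: roughly what page_mapping()/folio_mapping()
	 * does in kernels of this era, abridged from memory.
	 */
	static struct address_space *sketch_page_mapping(struct page *page)
	{
		struct folio *folio = page_folio(page);

		/* Swap cache page: report the swap address space, i.e. non-NULL. */
		if (folio_test_swapcache(folio))
			return swap_address_space(folio_swap_entry(folio));

		/* Anon page: ->mapping has PAGE_MAPPING_ANON set, report NULL. */
		if ((unsigned long)folio->mapping & PAGE_MAPPING_ANON)
			return NULL;

		/* File/shmem page: mask off the flag bits and return the mapping. */
		return (struct address_space *)
			((unsigned long)folio->mapping & ~PAGE_MAPPING_FLAGS);
	}

For the two callers discussed above (a newly allocated anon page from
mcopy_atomic_pte(), or a locked shmem page from shmem_get_folio()) the two
checks agree. They diverge only for an anon page sitting in the swap cache,
where page_mapping() would make page_in_cache true, while Matthew's
head->mapping && !PageMappingFlags(head) stays false because the anon flag is
still set in ->mapping; that is the case Matthew flags as wrong and Peter
argues is unreachable here.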