Date: Wed, 28 Apr 2021 12:23:32 -0400
From: Peter Xu
To: Axel Rasmussen
Cc: Hugh Dickins, Alexander Viro, Andrea Arcangeli, Andrew Morton,
	Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz,
	Mike Rapoport, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing,
	linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org, LKML,
	linux-kselftest@vger.kernel.org, Linux MM, Brian Geffon,
	"Dr. David Alan Gilbert", Mina Almasry, Oliver Upton
Subject: Re: [PATCH v5 06/10] userfaultfd/shmem: modify shmem_mcopy_atomic_pte to use install_pte()
Message-ID: <20210428162332.GE6584@xz-x1>
References: <20210427225244.4326-1-axelrasmussen@google.com>
	<20210427225244.4326-7-axelrasmussen@google.com>
	<20210428155638.GD6584@xz-x1>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Apr 28, 2021 at 08:59:53AM -0700, Axel Rasmussen wrote:
> On Wed, Apr 28, 2021 at 8:56 AM Peter Xu wrote:
> >
> > On Tue, Apr 27, 2021 at 05:58:16PM -0700, Hugh Dickins wrote:
> > > On Tue, 27 Apr 2021, Axel Rasmussen wrote:
> > > >
> > > > In a previous commit, we added the mcopy_atomic_install_pte() helper.
> > > > This helper does the job of setting up PTEs for an existing page, to map
> > > > it into a given VMA. It deals with both the anon and shmem cases, as
> > > > well as the shared and private cases.
> > > >
> > > > In other words, shmem_mcopy_atomic_pte() duplicates a case it already
> > > > handles. So, expose it, and let shmem_mcopy_atomic_pte() use it
> > > > directly, to reduce code duplication.
> > > >
> > > > This requires that we refactor shmem_mcopy_atomic_pte() a bit:
> > > >
> > > > Instead of doing accounting (shmem_recalc_inode() et al) part-way
> > > > through the PTE setup, do it afterward. This frees up
> > > > mcopy_atomic_install_pte() from having to care about this accounting,
> > > > and means we don't need to e.g. shmem_uncharge() in the error path.
> > > >
> > > > A side effect is that this switches shmem_mcopy_atomic_pte() to use
> > > > lru_cache_add_inactive_or_unevictable() instead of just lru_cache_add().
> > > > This wrapper does some extra accounting in an exceptional case, if
> > > > appropriate, so it's actually the more correct thing to use.
> > > >
> > > > Signed-off-by: Axel Rasmussen
> > >
> > > Not quite. Two things.
> > >
> > > One, in this version, delete_from_page_cache(page) has vanished
> > > from the particular error path which needs it.
> >
> > Agreed. I also spotted that the set_page_dirty() seems to have been overlooked
> > when reusing mcopy_atomic_install_pte(), which afaiu should be moved into the
> > helper.
>
> I think this is covered: we explicitly call SetPageDirty() just before
> returning in shmem_mcopy_atomic_pte(). If I remember correctly from a
> couple of revisions ago, we consciously put it here instead of in the
> helper because it resulted in simpler code (error handling in
> particular, I think?), and not all callers of the new helper need it.

Indeed, yes that looks okay.
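A minimal userspace sketch of that ordering, with stand-in names rather than the
real kernel functions: the shared helper only installs the PTE, and the shmem
caller does the inode accounting and the SetPageDirty() afterwards, so a failure
in the helper leaves nothing to unaccount.

/* Userspace model (NOT kernel code); identifiers are stand-ins for the
 * functions named in the thread. */
#include <stdbool.h>
#include <stdio.h>

struct page_model { bool mapped; bool dirty; };
struct inode_model { long alloced; };

/* stands in for mcopy_atomic_install_pte(): it only maps the page */
static int install_pte_model(struct page_model *page)
{
	page->mapped = true;
	return 0;
}

/* stands in for the tail of the refactored shmem_mcopy_atomic_pte() */
static int shmem_fill_model(struct inode_model *inode, struct page_model *page)
{
	int ret = install_pte_model(page);
	if (ret)
		return ret;		/* helper failed: nothing to unaccount */

	inode->alloced++;		/* accounting only after the mapping succeeded... */
	page->dirty = true;		/* ...and the caller, not the helper, marks it dirty */
	return 0;
}

int main(void)
{
	struct inode_model inode = { 0 };
	struct page_model page = { false, false };

	int ret = shmem_fill_model(&inode, &page);
	printf("ret=%d mapped=%d dirty=%d alloced=%ld\n",
	       ret, page.mapped, page.dirty, inode.alloced);
	return ret;
}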
> > >
> > > Two, and I think this predates your changes (so needs a separate
> > > fix patch first, for backport to stable? a user with bad intentions
> > > might be able to trigger the BUG), in pondering the new error paths
> > > and that /* don't free the page */ one in particular, isn't it the
> > > case that the shmem_inode_acct_block() on entry might succeed the
> > > first time, but the atomic copy fail so -ENOENT, then something else
> > > fill up the tmpfs before the retry comes in, so that the retry then
> > > fail with -ENOMEM, and hit the BUG_ON(page) in __mcopy_atomic()?
> > >
> > > (As I understand it, the shmem_inode_unacct_blocks() has to be
> > > done before returning, because the caller may be unable to retry.)
> > >
> > > What the right fix is rather depends on other uses of __mcopy_atomic():
> > > if they obviously cannot hit that BUG_ON(page), you may prefer to leave
> > > it in, and fix it here where shmem_inode_acct_block() fails. Or you may
> > > prefer instead to delete that "else BUG_ON(page);" - looks as if that
> > > would end up doing the right thing. Peter may have a preference.
> >
> > To me, the BUG_ON(page) was meant to guarantee that mfill_atomic_pte() has
> > consumed the page properly when possible. Removing the BUG_ON() looks good
> > already; it will just stop covering the case when e.g. ret==0.
> >
> > So maybe it is slightly better to release the page when shmem_inode_acct_block()
> > fails (so as to still keep some guard on the page)?
>
> This second issue, I will take some more time to investigate. :)

No worries - take your time. :)

-- 
Peter Xu
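To make the scenario Hugh describes concrete, below is a minimal, self-contained
userspace model of the retry sequence; it is not the kernel code, and the
identifiers are stand-ins mirroring the thread. The first attempt hands the
freshly allocated page back with -ENOENT so the copy can be done outside atomic
context, the tmpfs fills up in between, and the retry then fails the accounting
check with -ENOMEM while the page is still held, which is exactly the state that
would trip the "else BUG_ON(page);" branch.

/* Userspace model (NOT kernel code) of the __mcopy_atomic() retry loop
 * discussed above.  The stand-in fill function returns -ENOENT on the
 * first call (page allocated, atomic copy "failed") and -ENOMEM on the
 * retry, because the accounting check no longer passes. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int tmpfs_full;	/* flips to 1 between the two attempts */

/* stands in for mfill_atomic_pte() -> shmem_mcopy_atomic_pte() */
static int fill_model(void **pagep)
{
	if (tmpfs_full)
		return -ENOMEM;		/* the block-accounting stand-in fails on retry */

	if (!*pagep) {
		*pagep = malloc(4096);	/* page allocated; the atomic copy "fails" */
		return -ENOENT;		/* ask the caller to copy and retry */
	}
	return 0;
}

int main(void)
{
	void *page = NULL;
	int err;

retry:
	err = fill_model(&page);
	if (err == -ENOENT) {
		/* the caller copies from userspace outside atomic context... */
		tmpfs_full = 1;		/* ...but something else fills the tmpfs meanwhile */
		goto retry;
	}
	if (err && page) {
		/* this is the state that would hit the "else BUG_ON(page);" */
		printf("would hit BUG_ON(page): err=%d with the page still held\n", err);
	}
	free(page);
	return 0;
}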