From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 26 May 2021 15:50:05 -0400
From: Peter Xu
To: Alistair Popple
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, nouveau@lists.freedesktop.org,
 bskeggs@redhat.com, rcampbell@nvidia.com, linux-doc@vger.kernel.org,
 jhubbard@nvidia.com, bsingharora@gmail.com, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, hch@infradead.org, jglisse@redhat.com,
 willy@infradead.org, jgg@nvidia.com, hughd@google.com
Subject: Re: [PATCH v9 06/10] mm/memory.c: Allow different return codes for copy_nonpresent_pte()
Message-ID: 
References: <20210524132725.12697-1-apopple@nvidia.com>
 <20210524132725.12697-7-apopple@nvidia.com>
In-Reply-To: <20210524132725.12697-7-apopple@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

On Mon, May 24, 2021 at 11:27:21PM +1000, Alistair Popple wrote:
> Currently if copy_nonpresent_pte() returns a non-zero value it is
> assumed to be a swap entry which requires further processing outside the
> loop in copy_pte_range() after dropping locks. This prevents other
> values being returned to signal conditions such as failure which a
> subsequent change requires.
> 
> Instead make copy_nonpresent_pte() return an error code if further
> processing is required and read the value for the swap entry in the main
> loop under the ptl.
> 
> Signed-off-by: Alistair Popple
> 
> ---
> 
> v9:
> 
> New for v9 to allow device exclusive handling to occur in
> copy_nonpresent_pte().
> ---
>  mm/memory.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 2fb455c365c2..e061cfa18c11 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -718,7 +718,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> 
>  	if (likely(!non_swap_entry(entry))) {
>  		if (swap_duplicate(entry) < 0)
> -			return entry.val;
> +			return -EAGAIN;
> 
>  		/* make sure dst_mm is on swapoff's mmlist. */
>  		if (unlikely(list_empty(&dst_mm->mmlist))) {
> @@ -974,11 +974,13 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  			continue;
>  		}
>  		if (unlikely(!pte_present(*src_pte))) {
> -			entry.val = copy_nonpresent_pte(dst_mm, src_mm,
> -							dst_pte, src_pte,
> -							src_vma, addr, rss);
> -			if (entry.val)
> +			ret = copy_nonpresent_pte(dst_mm, src_mm,
> +						  dst_pte, src_pte,
> +						  src_vma, addr, rss);
> +			if (ret == -EAGAIN) {
> +				entry = pte_to_swp_entry(*src_pte);
>  				break;
> +			}
>  			progress += 8;
>  			continue;
>  		}

Note that -EAGAIN was previously used by copy_present_page() for the early
COW case.  Later in copy_pte_range(), although we check entry.val first:

	if (entry.val) {
		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) {
			ret = -ENOMEM;
			goto out;
		}
		entry.val = 0;
	} else if (ret) {
		WARN_ON_ONCE(ret != -EAGAIN);
		prealloc = page_copy_prealloc(src_mm, src_vma, addr);
		if (!prealloc)
			return -ENOMEM;
		/* We've captured and resolved the error. Reset, try again. */
		ret = 0;
	}

we don't reset "ret" in the entry.val case (maybe we should?).  Then, in the
next round of "goto again", if "ret" is unluckily left untouched, it could
reach the second if check and, I think, cause an unexpected
page_copy_prealloc().
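Maybe something along these lines (untested, just a sketch of the kind of
reset I have in mind, not a concrete patch):

	if (entry.val) {
		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) {
			ret = -ENOMEM;
			goto out;
		}
		entry.val = 0;
		/*
		 * Hypothetical: clear a stale -EAGAIN left over from
		 * copy_nonpresent_pte() so the next "goto again" round
		 * cannot wrongly take the prealloc branch below.
		 */
		ret = 0;
	} else if (ret) {
		WARN_ON_ONCE(ret != -EAGAIN);
		prealloc = page_copy_prealloc(src_mm, src_vma, addr);
		if (!prealloc)
			return -ENOMEM;
		/* We've captured and resolved the error. Reset, try again. */
		ret = 0;
	}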
-- 
Peter Xu