Date: Fri, 28 May 2021 09:11:25 -0400
From: Peter Xu
To: Alistair Popple
Cc: linux-mm@kvack.org, akpm@linux-foundation.org,
	nouveau@lists.freedesktop.org, bskeggs@redhat.com, rcampbell@nvidia.com,
	linux-doc@vger.kernel.org, jhubbard@nvidia.com, bsingharora@gmail.com,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	hch@infradead.org, jglisse@redhat.com, willy@infradead.org,
	jgg@nvidia.com, hughd@google.com, Christoph Hellwig
Subject: Re: [PATCH v9 07/10] mm: Device exclusive memory access
References: <20210524132725.12697-1-apopple@nvidia.com>
	<37725705.JvxlXkkoz5@nvdebian> <2243324.CkbYuGXDfH@nvdebian>
In-Reply-To: <2243324.CkbYuGXDfH@nvdebian>

On Fri, May 28, 2021 at 11:48:40AM +1000, Alistair Popple wrote:

[...]

> > > > > +	while (page_vma_mapped_walk(&pvmw)) {
> > > > > +		/* Unexpected PMD-mapped THP? */
> > > > > +		VM_BUG_ON_PAGE(!pvmw.pte, page);
> > > > > +
> > > > > +		if (!pte_present(*pvmw.pte)) {
> > > > > +			ret = false;
> > > > > +			page_vma_mapped_walk_done(&pvmw);
> > > > > +			break;
> > > > > +		}
> > > > > +
> > > > > +		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
> > > >
> > > > I see that all pages passed in should be done after FOLL_SPLIT_PMD,
> > > > so is this needed?  Or say, should subpage==page always be true?
> > >
> > > Not always, in the case of a thp there are small ptes which will get
> > > device exclusive entries.
> >
> > FOLL_SPLIT_PMD will first split the huge thp into smaller pages, then do
> > follow_page_pte() on them (in follow_pmd_mask):
> >
> > 	if (flags & FOLL_SPLIT_PMD) {
> > 		int ret;
> > 		page = pmd_page(*pmd);
> > 		if (is_huge_zero_page(page)) {
> > 			spin_unlock(ptl);
> > 			ret = 0;
> > 			split_huge_pmd(vma, pmd, address);
> > 			if (pmd_trans_unstable(pmd))
> > 				ret = -EBUSY;
> > 		} else {
> > 			spin_unlock(ptl);
> > 			split_huge_pmd(vma, pmd, address);
> > 			ret = pte_alloc(mm, pmd) ? -ENOMEM : 0;
> > 		}
> >
> > 		return ret ? ERR_PTR(ret) :
> > 			follow_page_pte(vma, address, pmd, flags,
> > 					&ctx->pgmap);
> > 	}
> >
> > So I thought all pages are small pages?
>
> The page will remain as a transparent huge page though (at least as I
> understand things).  FOLL_SPLIT_PMD turns it into a pte-mapped thp by
> splitting the pmd and creating ptes mapping the subpages, but doesn't
> split the page itself.  For comparison, FOLL_SPLIT (which has been
> removed in v5.13 due to lack of use) is what would be used to split the
> page in the above GUP code, by calling split_huge_page() rather than
> split_huge_pmd().

But shouldn't FOLL_SPLIT_PMD have filled in small pfns for each pte?  See
__split_huge_pmd_locked():

	for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
		...
		} else {
			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
			...
		}
		...
		set_pte_at(mm, addr, pte, entry);
	}

Then, iiuc, the coming follow_page_pte() will directly fetch the small
pages?
-- 
Peter Xu
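
To make the subpage arithmetic from the thread concrete, here is a minimal
user-space model (an illustrative sketch under stated assumptions, not
kernel code: "struct page" entries are modeled as elements of a flat
memmap[] array indexed by pfn, and HEAD_PFN is a made-up value; the
flat-array layout is exactly the property the kernel expression relies on):

	/* build: cc -o subpage subpage.c && ./subpage */
	#include <assert.h>
	#include <stdio.h>

	#define HPAGE_PMD_NR	512	/* subpages in a 2M THP on x86-64 */
	#define HEAD_PFN	8	/* hypothetical pfn of the THP head page */

	struct page { int dummy; };

	static struct page memmap[HEAD_PFN + HPAGE_PMD_NR];

	/* Models the kernel's page_to_pfn(): index into the flat memmap. */
	static unsigned long page_to_pfn(struct page *p)
	{
		return (unsigned long)(p - memmap);
	}

	int main(void)
	{
		struct page *page = &memmap[HEAD_PFN];	/* the THP head page */
		unsigned long i;

		/* After __split_huge_pmd_locked(), pte i maps pfn(head) + i. */
		for (i = 0; i < HPAGE_PMD_NR; i++) {
			unsigned long pte_pfn = page_to_pfn(page) + i;

			/* The expression from the patch hunk quoted above: */
			struct page *subpage = page - page_to_pfn(page) + pte_pfn;

			assert(subpage == &memmap[HEAD_PFN + i]);
		}

		/* subpage == page holds only for the head pte (i == 0). */
		printf("subpage spans all %d subpages of the THP\n", HPAGE_PMD_NR);
		return 0;
	}

Since each pte of a pte-mapped THP carries its own small pfn, the walk in
the patch lands on a different subpage for every pte; subpage == page holds
only for the pte that maps the head page.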