From: Joao Martins <joao.m.martins@oracle.com>
To: linux-mm@kvack.org
Cc: Dan Williams <dan.j.williams@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>,
	Dave Jiang <dave.jiang@intel.com>,
	Naoya Horiguchi <naoya.horiguchi@nec.com>,
	Matthew Wilcox <willy@infradead.org>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	John Hubbard <jhubbard@nvidia.com>,
	Jane Chu <jane.chu@oracle.com>,
	Muchun Song <songmuchun@bytedance.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Jonathan Corbet <corbet@lwn.net>, Christoph Hellwig <hch@lst.de>,
	nvdimm@lists.linux.dev, linux-doc@vger.kernel.org
Subject: Re: [PATCH v6 09/10] device-dax: set mapping prior to vmf_insert_pfn{,_pmd,pud}()
Date: Thu, 25 Nov 2021 11:42:22 +0000
Message-ID: <0439eb48-1688-a4f4-5feb-8eb2680d652f@oracle.com>
In-Reply-To: <20211124191005.20783-10-joao.m.martins@oracle.com>

On 11/24/21 19:10, Joao Martins wrote:
> Normally, the @page mapping is set prior to inserting the page into a
> page table entry. Make device-dax adhere to the same ordering, rather
> than setting mapping after the PTE is inserted.
> 
> The address_space never changes and it is always associated with the
> same inode and underlying pages. So, the page mapping is set once but
> cleared when the struct pages are removed/freed (i.e. after
> {devm_}memunmap_pages()).
> 
> Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> ---
>  drivers/dax/device.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/dax/device.c b/drivers/dax/device.c
> index 9c87927d4bc2..0ef9fecec005 100644
> --- a/drivers/dax/device.c
> +++ b/drivers/dax/device.c
> @@ -121,6 +121,8 @@ static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax,
>  
>  	*pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP);
>  
> +	dax_set_mapping(vmf, *pfn, fault_size);
> +
>  	return vmf_insert_mixed(vmf->vma, vmf->address, *pfn);
>  }
>  
> @@ -161,6 +163,8 @@ static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax,
>  
>  	*pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP);
>  
> +	dax_set_mapping(vmf, *pfn, fault_size);
> +
>  	return vmf_insert_pfn_pmd(vmf, *pfn, vmf->flags & FAULT_FLAG_WRITE);
>  }
>  
> @@ -203,6 +207,8 @@ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
>  
>  	*pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP);
>  
> +	dax_set_mapping(vmf, *pfn, fault_size);
> +
>  	return vmf_insert_pfn_pud(vmf, *pfn, vmf->flags & FAULT_FLAG_WRITE);
>  }
>  #else
> @@ -245,8 +251,6 @@ static vm_fault_t dev_dax_huge_fault(struct vm_fault *vmf,
>  		rc = VM_FAULT_SIGBUS;
>  	}
>  
> -	if (rc == VM_FAULT_NOPAGE)
> -		dax_set_mapping(vmf, pfn, fault_size);
>  	dax_read_unlock(id);
>  
>  	return rc;
> 
This last chunk is going to trigger a new compile warning, because @fault_size in
dev_dax_huge_fault() becomes unused after this patch.
I've added the chunk below for the next version (in addition to addressing Christoph's
comments on patch 4):

@@ -217,7 +223,6 @@ static vm_fault_t dev_dax_huge_fault(struct vm_fault *vmf,
                enum page_entry_size pe_size)
 {
        struct file *filp = vmf->vma->vm_file;
-       unsigned long fault_size;
        vm_fault_t rc = VM_FAULT_SIGBUS;
        int id;
        pfn_t pfn;
@@ -230,23 +235,18 @@ static vm_fault_t dev_dax_huge_fault(struct vm_fault *vmf,
        id = dax_read_lock();
        switch (pe_size) {
        case PE_SIZE_PTE:
-               fault_size = PAGE_SIZE;
                rc = __dev_dax_pte_fault(dev_dax, vmf, &pfn);
                break;
        case PE_SIZE_PMD:
-               fault_size = PMD_SIZE;
                rc = __dev_dax_pmd_fault(dev_dax, vmf, &pfn);
                break;
        case PE_SIZE_PUD:
-               fault_size = PUD_SIZE;
                rc = __dev_dax_pud_fault(dev_dax, vmf, &pfn);
                break;
        default:
                rc = VM_FAULT_SIGBUS;
        }
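
For anyone reading this without the rest of the series at hand: dax_set_mapping() is the
helper factored out of dev_dax_huge_fault() in patch 08. A rough sketch of what it does,
based on the mapping-setup loop that previously lived in dev_dax_huge_fault(), is below;
the exact version in the series may differ (patch 10, for instance, adjusts it for
compound pages):

static void dax_set_mapping(struct vm_fault *vmf, pfn_t pfn,
			    unsigned long fault_size)
{
	unsigned long i, nr_pages = fault_size / PAGE_SIZE;
	struct file *filp = vmf->vma->vm_file;
	pgoff_t pgoff;

	pgoff = linear_page_index(vmf->vma,
			ALIGN(vmf->address, fault_size));

	for (i = 0; i < nr_pages; i++) {
		struct page *page = pfn_to_page(pfn_t_to_pfn(pfn) + i);

		/* Skip pages already initialized by an earlier fault. */
		if (page->mapping)
			continue;

		page->mapping = filp->f_mapping;
		page->index = pgoff + i;
	}
}

With that helper called from __dev_dax_{pte,pmd,pud}_fault() ahead of the vmf_insert_*()
calls, dev_dax_huge_fault() no longer needs @fault_size at all, which is what the hunk
above removes.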
