From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail-oi1-f196.google.com ([209.85.167.196]:37311 "EHLO mail-oi1-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725790AbeLAHPu (ORCPT ); Sat, 1 Dec 2018 02:15:50 -0500
Received: by mail-oi1-f196.google.com with SMTP id y23so5792630oia.4 for ; Fri, 30 Nov 2018 12:05:25 -0800 (PST)
MIME-Version: 1.0
References: <154353682674.1676897.15440708268545845062.stgit@dwillia2-desk3.amr.corp.intel.com>
 <20181130154902.GL10377@bombadil.infradead.org>
 <20181130162435.GM10377@bombadil.infradead.org>
 <20181130195021.GN10377@bombadil.infradead.org>
In-Reply-To: <20181130195021.GN10377@bombadil.infradead.org>
From: Dan Williams
Date: Fri, 30 Nov 2018 12:05:13 -0800
Message-ID:
Subject: Re: [PATCH] dax: Fix Xarray conversion of dax_unlock_mapping_entry()
To: Matthew Wilcox
Cc: linux-nvdimm , Jan Kara , linux-fsdevel , Linux Kernel Mailing List
Content-Type: text/plain; charset="UTF-8"
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

On Fri, Nov 30, 2018 at 11:50 AM Matthew Wilcox wrote:
>
> On Fri, Nov 30, 2018 at 09:01:07AM -0800, Dan Williams wrote:
> > On Fri, Nov 30, 2018 at 8:33 AM Dan Williams wrote:
> > >
> > > On Fri, Nov 30, 2018 at 8:24 AM Matthew Wilcox wrote:
> > > >
> > > > On Fri, Nov 30, 2018 at 07:54:49AM -0800, Dan Williams wrote:
> > > > > Looks good to me, although can we make that cookie an actual type? I
> > > > > think it's mostly ok to pass around (void *) for 'entry' inside of
> > > > > fs/dax.c, but once an entry leaves that file I'd like it to have an
> > > > > explicit type to catch people that might accidentally pass a (struct
> > > > > page *) to the unlock routine.
> > > >
> > > > That's a really good idea. Something like this?
> > > >
> > > > typedef struct {
> > > >         void *v;
> > > > } dax_entry_t;
> > > Yes, please.
>
> Oh. The caller needs to interpret it to see if the entry was successfully
> locked, so it needs to be an integer type (or we need a comparison
> function ... bleh).
>
> > > I could see us making good use of that within dax.c.
> > I'm now thinking that this is a nice improvement for 4.21. For 4.20-rc
> > lets do the localized fix.
>
> I think both patches are equally risky. I admit this patch crosses a
> file boundary, but the other patch changes dax_make_entry() which is
> used by several functions which aren't part of this path, whereas this
> patch only changes functions used in the path which is known to be buggy.

I'm almost buying this argument... but I'd feel better knowing that
all dax_make_entry() usages are safe against this bug pattern. I
didn't audit the other occurrences of dax_make_entry() for this bug,
did you build some confidence here?

>
> This patch has the advantage of getting us closer to where we want to be
> sooner.

One comment below...

>
> From 1135b8d08f23ab4f5b28261535a817f3de9297c9 Mon Sep 17 00:00:00 2001
> From: Matthew Wilcox
> Date: Fri, 30 Nov 2018 11:05:06 -0500
> Subject: [PATCH] dax: Change lock/unlock API
>
> Return the unlock cookie from dax_lock_mapping_entry() and
> pass it to dax_unlock_mapping_entry(). This fixes a bug where
> dax_unlock_mapping_entry() was assuming that the page was PMD-aligned
> if the entry was a PMD entry.
>
> Debugged-by: Dan Williams
> Fixes: 9f32d221301c ("dax: Convert dax_lock_mapping_entry to XArray")
> Signed-off-by: Matthew Wilcox
> ---
>  fs/dax.c            | 21 ++++++++-------------
>  include/linux/dax.h | 15 +++++++++------
>  mm/memory-failure.c |  6 ++++--
>  3 files changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index 9bcce89ea18e..d2c04e802978 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -351,20 +351,20 @@ static struct page *dax_busy_page(void *entry)
>   * @page: The page whose entry we want to lock
>   *
>   * Context: Process context.
> - * Return: %true if the entry was locked or does not need to be locked.
> + * Return: A cookie to pass to dax_unlock_mapping_entry() or 0 if the
> + * entry could not be locked.
>   */
> -bool dax_lock_mapping_entry(struct page *page)
> +dax_entry_t dax_lock_mapping_entry(struct page *page)
>  {
>          XA_STATE(xas, NULL, 0);
>          void *entry;
> -        bool locked;
>
>          /* Ensure page->mapping isn't freed while we look at it */
>          rcu_read_lock();
>          for (;;) {
>                  struct address_space *mapping = READ_ONCE(page->mapping);
>
> -                locked = false;
> +                entry = NULL;
>                  if (!dax_mapping(mapping))
>                          break;
>
> @@ -375,7 +375,7 @@ bool dax_lock_mapping_entry(struct page *page)
>                   * otherwise we would not have a valid pfn_to_page()
>                   * translation.
>                   */
> -                locked = true;
> +                entry = (void *)~0UL;
>                  if (S_ISCHR(mapping->host->i_mode))
>                          break;
>
> @@ -400,23 +400,18 @@ bool dax_lock_mapping_entry(struct page *page)
>                  break;
>          }
>          rcu_read_unlock();
> -        return locked;
> +        return (dax_entry_t)entry;
>  }
>
> -void dax_unlock_mapping_entry(struct page *page)
> +void dax_unlock_mapping_entry(struct page *page, dax_entry_t entry)

Let's not require the page to be passed back, it can be derived:

    page = pfn_to_page(dax_to_pfn((void *) entry));

A bit more symmetric that way and canonical with other locking schemes
that return a cookie.
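
To make that concrete, here is a rough sketch of the interface shape being
suggested above -- an integer-typed cookie plus an unlock side that recomputes
the page from the entry. It assumes dax_to_pfn() remains usable at that point
and ignores the device-DAX sentinel cookie; it is not the patch as posted,
just an illustration of the direction.

/* Sketch only: integer-typed cookie so callers can test it directly. */
typedef unsigned long dax_entry_t;

/* Returns a cookie, or 0 if the entry could not be locked. */
dax_entry_t dax_lock_mapping_entry(struct page *page);

/* The unlock side would take just the cookie and derive the page itself. */
void dax_unlock_mapping_entry(dax_entry_t cookie)
{
        void *entry = (void *)cookie;
        struct page *page = pfn_to_page(dax_to_pfn(entry));

        /* ... clear the lock bit on the Xarray entry for page->mapping ... */
}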