From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Matthew Wilcox <willy@infradead.org>
Cc: Qian Cai <cai@lca.pw>, Huang Ying <ying.huang@intel.com>,
	linux-mm@kvack.org
Subject: Re: page cache: Store only head pages in i_pages
Date: Mon, 1 Apr 2019 12:27:16 +0300	[thread overview]
Message-ID: <20190401092716.mxw32y4sl66ywc2o@kshutemo-mobl1> (raw)
In-Reply-To: <20190401091858.s7clitbvf46nomjm@kshutemo-mobl1>

On Mon, Apr 01, 2019 at 12:18:58PM +0300, Kirill A. Shutemov wrote:
> On Sat, Mar 30, 2019 at 08:23:26PM -0700, Matthew Wilcox wrote:
> > On Sat, Mar 30, 2019 at 07:10:52AM -0700, Matthew Wilcox wrote:
> > > On Fri, Mar 29, 2019 at 08:04:32PM -0700, Matthew Wilcox wrote:
> > > > Excellent!  I'm not comfortable with the rule that you have to be holding
> > > > the i_pages lock in order to call find_get_page() on a swap address_space.
> > > > How does this look to the various smart people who know far more about the
> > > > MM than I do?
> > > > 
> > > > The idea is to ensure that if this race does happen, the page will be
> > > > handled the same way as a pagecache page.  If __delete_from_swap_cache()
> > > > can be called while the page is still part of a VMA, then this patch
> > > > will break page_to_pgoff().  But I don't think that can happen ... ?
> > > 
> > > Oh, blah, that can totally happen.  reuse_swap_page() calls
> > > delete_from_swap_cache().  Need a new plan.
> > 
> > I don't see a good solution here that doesn't involve withdrawing this
> > patch and starting over.  Bad solutions:
> > 
> >  - Take the i_pages lock around each page lookup call in the swap code
> >    (not just the one you found; there are others like mc_handle_swap_pte()
> >    in memcontrol.c)
> >  - Call synchronize_rcu() in __delete_from_swap_cache()
> >  - Swap the roles of ->index and ->private for swap pages, and then don't
> >    clear ->index when deleting a page from the swap cache
> > 
> > The first two would be slow and non-scalable.  The third is still prone
> > to a race where the page is looked up on one CPU, while another CPU
> > removes it from one swap file then moves it to a different location,
> > potentially in a different swap file.  Hard to hit, but not a race we
> > want to introduce.
> > 
> > I believe that the swap code actually never wants to see subpages.  So if
> > we start again, introducing APIs (eg find_get_head()) which return the
> > head page, then convert the swap code over to use those APIs, we don't
> > need to solve the problem of finding the subpage of a swap page while
> > not holding the page lock.
> > 
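(A rough sketch of the kind of head-returning lookup suggested above, the
find_get_head() idea. This is purely illustrative and not from any posted
patch; it assumes the existing find_get_entry() interface, which returns
the page with a reference already held.)

/*
 * Illustrative only: a lookup that always hands back the head page, so
 * swap-cache callers never need to derive a subpage.
 */
static inline struct page *find_get_head(struct address_space *mapping,
					 pgoff_t offset)
{
	struct page *page = find_get_entry(mapping, offset);

	/* Shadow/swap entries are values, not pages; pass them through. */
	if (!page || xa_is_value(page))
		return page;
	return compound_head(page);
}
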
> > I'm obviously reluctant to withdraw the patch, but I don't see a better
> > option.  Your testing has revealed a problem that needs a deeper solution
> > than just adding a fix patch.
> 
> Hm. Isn't the problem with the VM_BUGs themselves? I mean, find_subpage()
> produces the right result (or am I wrong here?), but the VM_BUGs flag it
> as wrong.

Yeah, I'm wrong. :P

What about a patch like this? (completely untested)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index f939e004c5d1..e3b9bf843dcb 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -335,12 +335,12 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
 
 static inline struct page *find_subpage(struct page *page, pgoff_t offset)
 {
-	unsigned long index = page_index(page);
+	unsigned long mask;
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(index > offset, page);
-	VM_BUG_ON_PAGE(index + (1 << compound_order(page)) <= offset, page);
-	return page - index + offset;
+
+	mask = (1UL << compound_order(page)) - 1;
+	return page + (offset & mask);
 }
 
 struct page *find_get_entry(struct address_space *mapping, pgoff_t offset);
-- 
 Kirill A. Shutemov
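
(For what it's worth, the arithmetic in the proposed find_subpage() can be
checked in isolation. Below is a minimal userspace sketch, not kernel code;
plain integers stand in for struct page pointers, and the order, index and
offset values are picked arbitrarily for illustration.)

#include <assert.h>
#include <stdio.h>

int main(void)
{
	/* An order-2 compound page (4 subpages) whose head sits at file
	 * index 4; the lookup asks for offset 6, i.e. the third subpage. */
	unsigned long head_index = 4;		/* page_index(head) */
	unsigned long offset = 6;		/* pgoff being looked up */
	unsigned int order = 2;			/* compound_order(head) */
	long head = 1000;			/* pretend head page "pointer" */

	/* Old calculation: depends on ->index holding a valid file offset. */
	long old_subpage = head - head_index + offset;

	/* New calculation: derived purely from the low bits of the offset,
	 * so it keeps working when ->index is reused (e.g. for swap). */
	unsigned long mask = (1UL << order) - 1;
	long new_subpage = head + (offset & mask);

	assert(old_subpage == new_subpage);	/* both yield head + 2 */
	printf("subpage = head + %ld\n", new_subpage - head);
	return 0;
}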



Thread overview: 21+ messages
     [not found] <1553285568.26196.24.camel@lca.pw>
2019-03-23  3:38 ` page cache: Store only head pages in i_pages Matthew Wilcox
2019-03-23 23:50   ` Qian Cai
2019-03-24  2:06     ` Matthew Wilcox
2019-03-24  2:52       ` Qian Cai
2019-03-24  3:04         ` Matthew Wilcox
2019-03-24 15:42           ` Qian Cai
2019-03-27 10:48           ` William Kucharski
2019-03-27 11:50             ` Matthew Wilcox
2019-03-29  1:43           ` Qian Cai
2019-03-29 19:59             ` Matthew Wilcox
2019-03-29 21:25               ` Qian Cai
2019-03-30  3:04                 ` Matthew Wilcox
2019-03-30 14:10                   ` Matthew Wilcox
2019-03-31  3:23                     ` Matthew Wilcox
2019-04-01  9:18                       ` Kirill A. Shutemov
2019-04-01  9:27                         ` Kirill A. Shutemov [this message]
2019-04-04 13:10                           ` Qian Cai
2019-04-04 13:45                             ` Kirill A. Shutemov
2019-04-04 21:28                               ` Qian Cai
2019-04-05 13:37                                 ` Kirill A. Shutemov
2019-04-05 13:51                                   ` Matthew Wilcox
