linux-mm.kvack.org archive mirror
From: Joonsoo Kim <js1304@gmail.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Alex Shi <alex.shi@linux.alibaba.com>,
	Shakeel Butt <shakeelb@google.com>,
	Hugh Dickins <hughd@google.com>, Michal Hocko <mhocko@suse.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Roman Gushchin <guro@fb.com>,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 05/18] mm: memcontrol: convert page cache to a new mem_cgroup_charge() API
Date: Thu, 23 Apr 2020 14:25:06 +0900	[thread overview]
Message-ID: <20200423052450.GA12538@js1304-desktop> (raw)
In-Reply-To: <20200422120946.GA358439@cmpxchg.org>

On Wed, Apr 22, 2020 at 08:09:46AM -0400, Johannes Weiner wrote:
> On Wed, Apr 22, 2020 at 03:40:41PM +0900, Joonsoo Kim wrote:
> > On Mon, Apr 20, 2020 at 06:11:13PM -0400, Johannes Weiner wrote:
> > > The try/commit/cancel protocol that memcg uses dates back to when
> > > pages used to be uncharged upon removal from the page cache, and thus
> > > couldn't be committed before the insertion had succeeded. Nowadays,
> > > pages are uncharged when they are physically freed; it doesn't matter
> > > whether the insertion was successful or not. For the page cache, the
> > > transaction dance has become unnecessary.
> > > 
> > > Introduce a mem_cgroup_charge() function that simply charges a newly
> > > allocated page to a cgroup and sets up page->mem_cgroup in one single
> > > step. If the insertion fails, the caller doesn't have to do anything
> > > but free/put the page.
> > > 
> > > Then switch the page cache over to this new API.
> > > 
> > > Subsequent patches will also convert anon pages, but it needs a bit
> > > more prep work. Right now, memcg depends on page->mapping being
> > > already set up at the time of charging, so that it can maintain its
> > > own MEMCG_CACHE and MEMCG_RSS counters. For anon, page->mapping is set
> > > under the same pte lock under which the page is published, so a single
> > > charge point that can block doesn't work there just yet.
> > > 
> > > The following prep patches will replace the private memcg counters
> > > with the generic vmstat counters, thus removing the page->mapping
> > > dependency, then complete the transition to the new single-point
> > > charge API and delete the old transactional scheme.
> > > 
> > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > ---
> > >  include/linux/memcontrol.h | 10 ++++
> > >  mm/filemap.c               | 24 ++++------
> > >  mm/memcontrol.c            | 27 +++++++++++
> > >  mm/shmem.c                 | 97 +++++++++++++++++---------------------
> > >  4 files changed, 89 insertions(+), 69 deletions(-)
> > > 
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index c7875a48c8c1..5e8b0e38f145 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -367,6 +367,10 @@ int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
> > >  void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
> > >  			      bool lrucare);
> > >  void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg);
> > > +
> > > +int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
> > > +		      bool lrucare);
> > > +
> > >  void mem_cgroup_uncharge(struct page *page);
> > >  void mem_cgroup_uncharge_list(struct list_head *page_list);
> > >  
> > > @@ -872,6 +876,12 @@ static inline void mem_cgroup_cancel_charge(struct page *page,
> > >  {
> > >  }
> > >  
> > > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > > +				    gfp_t gfp_mask, bool lrucare)
> > > +{
> > > +	return 0;
> > > +}
> > > +
> > >  static inline void mem_cgroup_uncharge(struct page *page)
> > >  {
> > >  }
> > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > index 5b31af9d5b1b..5bdbda965177 100644
> > > --- a/mm/filemap.c
> > > +++ b/mm/filemap.c
> > > @@ -832,7 +832,6 @@ static int __add_to_page_cache_locked(struct page *page,
> > >  {
> > >  	XA_STATE(xas, &mapping->i_pages, offset);
> > >  	int huge = PageHuge(page);
> > > -	struct mem_cgroup *memcg;
> > >  	int error;
> > >  	void *old;
> > >  
> > > @@ -840,17 +839,16 @@ static int __add_to_page_cache_locked(struct page *page,
> > >  	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
> > >  	mapping_set_update(&xas, mapping);
> > >  
> > > -	if (!huge) {
> > > -		error = mem_cgroup_try_charge(page, current->mm,
> > > -					      gfp_mask, &memcg);
> > > -		if (error)
> > > -			return error;
> > > -	}
> > > -
> > >  	get_page(page);
> > >  	page->mapping = mapping;
> > >  	page->index = offset;
> > >  
> > > +	if (!huge) {
> > > +		error = mem_cgroup_charge(page, current->mm, gfp_mask, false);
> > > +		if (error)
> > > +			goto error;
> > > +	}
> > > +
> > >  	do {
> > >  		xas_lock_irq(&xas);
> > >  		old = xas_load(&xas);
> > > @@ -874,20 +872,18 @@ static int __add_to_page_cache_locked(struct page *page,
> > >  		xas_unlock_irq(&xas);
> > >  	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
> > >  
> > > -	if (xas_error(&xas))
> > > +	if (xas_error(&xas)) {
> > > +		error = xas_error(&xas);
> > >  		goto error;
> > > +	}
> > >  
> > > -	if (!huge)
> > > -		mem_cgroup_commit_charge(page, memcg, false);
> > >  	trace_mm_filemap_add_to_page_cache(page);
> > >  	return 0;
> > >  error:
> > >  	page->mapping = NULL;
> > >  	/* Leave page->index set: truncation relies upon it */
> > > -	if (!huge)
> > > -		mem_cgroup_cancel_charge(page, memcg);
> > >  	put_page(page);
> > > -	return xas_error(&xas);
> > > +	return error;
> > >  }
> > >  ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
> > >  
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index 711d6dd5cbb1..b38c0a672d26 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -6577,6 +6577,33 @@ void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg)
> > >  	cancel_charge(memcg, nr_pages);
> > >  }
> > >  
> > > +/**
> > > + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > > + * @page: page to charge
> > > + * @mm: mm context of the victim
> > > + * @gfp_mask: reclaim mode
> > > + * @lrucare: page might be on the LRU already
> > > + *
> > > + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> > > + * pages according to @gfp_mask if necessary.
> > > + *
> > > + * Returns 0 on success. Otherwise, an error code is returned.
> > > + */
> > > +int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
> > > +		      bool lrucare)
> > > +{
> > > +	struct mem_cgroup *memcg;
> > > +	int ret;
> > > +
> > > +	VM_BUG_ON_PAGE(!page->mapping, page);
> > > +
> > > +	ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
> > > +	if (ret)
> > > +		return ret;
> > > +	mem_cgroup_commit_charge(page, memcg, lrucare);
> > > +	return 0;
> > > +}
> > > +
> > >  struct uncharge_gather {
> > >  	struct mem_cgroup *memcg;
> > >  	unsigned long pgpgout;
> > > diff --git a/mm/shmem.c b/mm/shmem.c
> > > index 52c66801321e..2384f6c7ef71 100644
> > > --- a/mm/shmem.c
> > > +++ b/mm/shmem.c
> > > @@ -605,11 +605,13 @@ static inline bool is_huge_enabled(struct shmem_sb_info *sbinfo)
> > >   */
> > >  static int shmem_add_to_page_cache(struct page *page,
> > >  				   struct address_space *mapping,
> > > -				   pgoff_t index, void *expected, gfp_t gfp)
> > > +				   pgoff_t index, void *expected, gfp_t gfp,
> > > +				   struct mm_struct *charge_mm)
> > >  {
> > >  	XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page));
> > >  	unsigned long i = 0;
> > >  	unsigned long nr = compound_nr(page);
> > > +	int error;
> > >  
> > >  	VM_BUG_ON_PAGE(PageTail(page), page);
> > >  	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
> > > @@ -621,6 +623,16 @@ static int shmem_add_to_page_cache(struct page *page,
> > >  	page->mapping = mapping;
> > >  	page->index = index;
> > >  
> > > +	error = mem_cgroup_charge(page, charge_mm, gfp, PageSwapCache(page));
> > > +	if (error) {
> > > +		if (!PageSwapCache(page) && PageTransHuge(page)) {
> > > +			count_vm_event(THP_FILE_FALLBACK);
> > > +			count_vm_event(THP_FILE_FALLBACK_CHARGE);
> > > +		}
> > > +		goto error;
> > > +	}
> > > +	cgroup_throttle_swaprate(page, gfp);
> > > +
> > >  	do {
> > >  		void *entry;
> > >  		xas_lock_irq(&xas);
> > > @@ -648,12 +660,15 @@ static int shmem_add_to_page_cache(struct page *page,
> > >  	} while (xas_nomem(&xas, gfp));
> > >  
> > >  	if (xas_error(&xas)) {
> > > -		page->mapping = NULL;
> > > -		page_ref_sub(page, nr);
> > > -		return xas_error(&xas);
> > > +		error = xas_error(&xas);
> > > +		goto error;
> > >  	}
> > >  
> > >  	return 0;
> > > +error:
> > > +	page->mapping = NULL;
> > > +	page_ref_sub(page, nr);
> > > +	return error;
> > >  }
> > >  
> > >  /*
> > > @@ -1619,7 +1634,6 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
> > >  	struct address_space *mapping = inode->i_mapping;
> > >  	struct shmem_inode_info *info = SHMEM_I(inode);
> > >  	struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
> > > -	struct mem_cgroup *memcg;
> > >  	struct page *page;
> > >  	swp_entry_t swap;
> > >  	int error;
> > > @@ -1664,29 +1678,22 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
> > >  			goto failed;
> > >  	}
> > >  
> > > -	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
> > > -	if (!error) {
> > > -		error = shmem_add_to_page_cache(page, mapping, index,
> > > -						swp_to_radix_entry(swap), gfp);
> > > -		/*
> > > -		 * We already confirmed swap under page lock, and make
> > > -		 * no memory allocation here, so usually no possibility
> > > -		 * of error; but free_swap_and_cache() only trylocks a
> > > -		 * page, so it is just possible that the entry has been
> > > -		 * truncated or holepunched since swap was confirmed.
> > > -		 * shmem_undo_range() will have done some of the
> > > -		 * unaccounting, now delete_from_swap_cache() will do
> > > -		 * the rest.
> > > -		 */
> > > -		if (error) {
> > > -			mem_cgroup_cancel_charge(page, memcg);
> > > -			delete_from_swap_cache(page);
> > > -		}
> > > -	}
> > > -	if (error)
> > > +	error = shmem_add_to_page_cache(page, mapping, index,
> > > +					swp_to_radix_entry(swap), gfp,
> > > +					charge_mm);
> > > +	/*
> > > +	 * We already confirmed swap under page lock, and make no
> > > +	 * memory allocation here, so usually no possibility of error;
> > > +	 * but free_swap_and_cache() only trylocks a page, so it is
> > > +	 * just possible that the entry has been truncated or
> > > +	 * holepunched since swap was confirmed.  shmem_undo_range()
> > > +	 * will have done some of the unaccounting, now
> > > +	 * delete_from_swap_cache() will do the rest.
> > > +	 */
> > > +	if (error) {
> > > +		delete_from_swap_cache(page);
> > >  		goto failed;
> > 
> > -EEXIST (from the swap cache) and -ENOMEM (from memcg) should be handled
> > differently. delete_from_swap_cache() is for the -EEXIST case.
> 
> Good catch, I accidentally changed things here.
> 
> I was just going to change it back, but now I'm trying to understand
> how it actually works.
> 
> Who is removing the page from swap cache if shmem_undo_range() races
> but we fail to charge the page?
> 
> Here is how this race is supposed to be handled: The page is in the
> swapcache, we have it locked and confirmed that the entry in i_pages
> is indeed a swap entry. We charge the page, then we try to replace the
> swap entry in i_pages with the actual page. If we determine, under
> tree lock now, that shmem_undo_range has raced with us, unaccounted
> the swap space, but must have failed to get the page lock, we remove
> the page from swap cache on our side, to free up swap slot and page.
> 
> But what if shmem_undo_range() raced with us, deleted the swap entry
> from i_pages while we had the page locked, but then we simply failed
> to charge? We unlock the page and return -EEXIST (shmem_confirm_swap
> at the exit). The page with its userdata is now in swapcache, but no
> corresponding swap entry in i_pages. shmem_getpage_gfp() sees the
> -EEXIST, retries, finds nothing in i_pages and allocates a new, empty
> page.
> 
> Aren't we leaking the swap slot and the page?

Yes, you're right! It seems that it's possible to leak the swap slot
and the page. The race can happen anywhere after lock_page() and
shmem_confirm_swap() are done. And I don't think the problem can be
fixed on the shmem_swapin_page() side, since we can't know when
trylock_page() is called. Maybe the solution would be to replace
free_swap_and_cache() in shmem_undo_range(), which only trylocks the
page, with another function that calls lock_page().

Thanks.


  reply	other threads:[~2020-04-23  5:25 UTC|newest]

Thread overview: 76+ messages
2020-04-20 22:11 [PATCH 00/18] mm: memcontrol: charge swapin pages on instantiation Johannes Weiner
2020-04-20 22:11 ` [PATCH 01/18] mm: fix NUMA node file count error in replace_page_cache() Johannes Weiner
2020-04-21  8:28   ` Alex Shi
2020-04-21 19:13   ` Shakeel Butt
2020-04-22  6:34   ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 02/18] mm: memcontrol: fix theoretical race in charge moving Johannes Weiner
2020-04-22  6:36   ` Joonsoo Kim
2020-04-22 16:51   ` Shakeel Butt
2020-04-22 17:42     ` Johannes Weiner
2020-04-22 18:01       ` Shakeel Butt
2020-04-22 18:02   ` Shakeel Butt
2020-04-20 22:11 ` [PATCH 03/18] mm: memcontrol: drop @compound parameter from memcg charging API Johannes Weiner
2020-04-21  9:11   ` Alex Shi
2020-04-22  6:37   ` Joonsoo Kim
2020-04-22 17:30   ` Shakeel Butt
2020-04-20 22:11 ` [PATCH 04/18] mm: memcontrol: move out cgroup swaprate throttling Johannes Weiner
2020-04-21  9:11   ` Alex Shi
2020-04-22  6:37   ` Joonsoo Kim
2020-04-22 22:20   ` Shakeel Butt
2020-04-20 22:11 ` [PATCH 05/18] mm: memcontrol: convert page cache to a new mem_cgroup_charge() API Johannes Weiner
2020-04-21  9:12   ` Alex Shi
2020-04-22  6:40   ` Joonsoo Kim
2020-04-22 12:09     ` Johannes Weiner
2020-04-23  5:25       ` Joonsoo Kim [this message]
2020-05-08 16:01         ` Johannes Weiner
2020-05-11  1:57           ` Joonsoo Kim
2020-05-11  7:38           ` Hugh Dickins
2020-05-11 15:06             ` Johannes Weiner
2020-05-11 16:32               ` Hugh Dickins
2020-05-11 18:10                 ` Johannes Weiner
2020-05-11 18:12                   ` Johannes Weiner
2020-05-11 18:44                   ` Hugh Dickins
2020-04-20 22:11 ` [PATCH 06/18] mm: memcontrol: prepare uncharging for removal of private page type counters Johannes Weiner
2020-04-21  9:12   ` Alex Shi
2020-04-22  6:41   ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 07/18] mm: memcontrol: prepare move_account " Johannes Weiner
2020-04-21  9:13   ` Alex Shi
2020-04-22  6:41   ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 08/18] mm: memcontrol: prepare cgroup vmstat infrastructure for native anon counters Johannes Weiner
2020-04-22  6:42   ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 09/18] mm: memcontrol: switch to native NR_FILE_PAGES and NR_SHMEM counters Johannes Weiner
2020-04-22  6:42   ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 10/18] mm: memcontrol: switch to native NR_ANON_MAPPED counter Johannes Weiner
2020-04-22  6:51   ` Joonsoo Kim
2020-04-22 12:28     ` Johannes Weiner
2020-04-23  5:27       ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 11/18] mm: memcontrol: switch to native NR_ANON_THPS counter Johannes Weiner
2020-04-24  0:29   ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 12/18] mm: memcontrol: convert anon and file-thp to new mem_cgroup_charge() API Johannes Weiner
2020-04-24  0:29   ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 13/18] mm: memcontrol: drop unused try/commit/cancel charge API Johannes Weiner
2020-04-24  0:30   ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 14/18] mm: memcontrol: prepare swap controller setup for integration Johannes Weiner
2020-04-24  0:30   ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 15/18] mm: memcontrol: make swap tracking an integral part of memory control Johannes Weiner
2020-04-21  9:27   ` Alex Shi
2020-04-21 14:39     ` Johannes Weiner
2020-04-22  3:14       ` Alex Shi
2020-04-22 13:30         ` Johannes Weiner
2020-04-22 13:40           ` Alex Shi
2020-04-22 13:43           ` Alex Shi
2020-04-24  0:30   ` Joonsoo Kim
2020-04-24  3:01   ` Johannes Weiner
2020-04-20 22:11 ` [PATCH 16/18] mm: memcontrol: charge swapin pages on instantiation Johannes Weiner
2020-04-21  9:21   ` Alex Shi
2020-04-24  0:44   ` Joonsoo Kim
2020-04-24  2:51     ` Johannes Weiner
2020-04-28  6:49       ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 17/18] mm: memcontrol: delete unused lrucare handling Johannes Weiner
2020-04-24  0:46   ` Joonsoo Kim
2020-04-20 22:11 ` [PATCH 18/18] mm: memcontrol: update page->mem_cgroup stability rules Johannes Weiner
2020-04-21  9:20   ` Alex Shi
2020-04-24  0:48   ` Joonsoo Kim
2020-04-21  9:10 ` Hillf Danton
2020-04-21 14:34   ` Johannes Weiner
2020-04-21  9:32 ` [PATCH 00/18] mm: memcontrol: charge swapin pages on instantiation Alex Shi
