From: Ryan Roberts <ryan.roberts@arm.com>
To: Matthew Wilcox <willy@infradead.org>, Zi Yan <ziy@nvidia.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, Yang Shi <shy828301@gmail.com>,
	Huang Ying <ying.huang@intel.com>
Subject: Re: [PATCH v3 10/18] mm: Allow non-hugetlb large folios to be batch processed
Date: Thu, 7 Mar 2024 08:56:27 +0000
Message-ID: <da729e0b-4eae-451c-baec-e58a3b5a2752@arm.com>
In-Reply-To: <ZejmTM1XbE0mPA2A@casper.infradead.org>

On 06/03/2024 21:55, Matthew Wilcox wrote:
> On Wed, Mar 06, 2024 at 07:55:50PM +0000, Matthew Wilcox wrote:
>> Hang on, I think I see it.  It is a race between folio freeing and
>> deferred_split_scan(), but page migration is absolved.  Look:
>>
>> CPU 1: deferred_split_scan:
>> spin_lock_irqsave(split_queue_lock)
>> list_for_each_entry_safe()
>> folio_try_get()
>> list_move(&folio->_deferred_list, &list);
>> spin_unlock_irqrestore(split_queue_lock)
>> list_for_each_entry_safe() {
>> 	folio_trylock() <- fails
>> 	folio_put(folio);
>>
>> CPU 2: folio_put:
>> folio_undo_large_rmappable
>>         ds_queue = get_deferred_split_queue(folio);
>>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>>                 list_del_init(&folio->_deferred_list);
>> *** at this point CPU 1 is not holding the split_queue_lock; the
>> folio is on the local list.  Which we just corrupted ***

Wow, this would have taken me weeks...

I just want to make sure I've understood correctly: CPU1's folio_put() doesn't drop the last reference, and CPU1 keeps iterating through the local list. Then CPU2 does the final folio_put(), whose list_del_init() modifies the local list concurrently with CPU1's iteration, so CPU1 likely goes into the weeds?
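
To spell the interleaving out for myself (a hand-wavy sketch reusing the
kernel names, not real code):

/* CPU 1: deferred_split_scan(), after dropping split_queue_lock */
list_for_each_entry_safe(folio, next, &list, _deferred_list) {
	/* 'next' was fetched from folio->_deferred_list.next */
	if (!folio_trylock(folio)) {	/* fails: CPU 2 has it locked */
		folio_put(folio);	/* drops CPU 1's pin; not the last ref */
		continue;
	}
	...
}

/* CPU 2: final folio_put() -> folio_undo_large_rmappable() */
spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
list_del_init(&folio->_deferred_list);
	/* rewrites the node CPU 1's iterator is standing on; the lock
	 * doesn't help because the folio is on CPU 1's *local* list */
spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);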

>>
>> Now anything can happen.  It's a pretty tight race that involves at
>> least two CPUs (CPU 2 might have been the one to have the folio locked
>> at the time CPU 1 called folio_trylock()).  But I definitely widened
>> the window by moving the decrement of the refcount and the removal from
>> the deferred list further apart.
>>
>>
>> OK, so what's the solution here?  Personally I favour using a
>> folio_batch in deferred_split_scan() to hold the folios that we're
>> going to try to remove instead of a linked list.  Other ideas that are
>> perhaps less intrusive?
> 
> I looked at a few options, but I think we need to keep the refcount
> elevated until we've got the folios back on the deferred split list.
> And we can't call folio_put() while holding the split_queue_lock or
> we'll deadlock.  So we need to maintain a list of folios that isn't
> linked through deferred_list.  Anyway, this is basically untested,
> except that it compiles.

If we can't call folio_put() under the spinlock, then I agree.
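
(Spelling the deadlock out, assuming our folio_put() drops the last
reference; a sketch of the call chain you quoted above, not real code:)

spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
folio_put(folio);		/* last reference */
  -> folio_undo_large_rmappable()
     -> spin_lock_irqsave(&ds_queue->split_queue_lock, ...);
        /* same lock, same CPU: self-deadlock */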

> 
> Opinions?  Better patches?

I assume it's not a problem that one scan is now limited to processing a batch's worth of folios? The shrinker will keep calling back while there are folios on the deferred list?
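
(My reading of the count side, paraphrased from mm/huge_memory.c, so
treat as a sketch:)

static unsigned long deferred_split_count(struct shrinker *shrink,
		struct shrink_control *sc)
{
	struct pglist_data *pgdata = NODE_DATA(sc->nid);
	struct deferred_split *ds_queue = &pgdata->deferred_split_queue;

#ifdef CONFIG_MEMCG
	if (sc->memcg)
		ds_queue = &sc->memcg->deferred_split_queue;
#endif
	/* Stays non-zero while folios remain queued, so we get called again. */
	return READ_ONCE(ds_queue->split_queue_len);
}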

> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index fd745bcc97ff..0120a47ea7a1 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3312,7 +3312,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>  	struct pglist_data *pgdata = NODE_DATA(sc->nid);
>  	struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
>  	unsigned long flags;
> -	LIST_HEAD(list);
> +	struct folio_batch batch;
>  	struct folio *folio, *next;
>  	int split = 0;
>  
> @@ -3321,37 +3321,41 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>  		ds_queue = &sc->memcg->deferred_split_queue;
>  #endif
>  
> +	folio_batch_init(&batch);
>  	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>  	/* Take pin on all head pages to avoid freeing them under us */
>  	list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
>  							_deferred_list) {
> -		if (folio_try_get(folio)) {
> -			list_move(&folio->_deferred_list, &list);
> -		} else {
> -			/* We lost race with folio_put() */
> -			list_del_init(&folio->_deferred_list);
> -			ds_queue->split_queue_len--;
> +		if (!folio_try_get(folio))
> +			continue;
> +		if (!folio_trylock(folio))
> +			continue;
> +		list_del_init(&folio->_deferred_list);
> +		if (folio_batch_add(&batch, folio) == 0) {
> +			--sc->nr_to_scan;
> +			break;
>  		}
>  		if (!--sc->nr_to_scan)
>  			break;
>  	}
>  	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
>  
> -	list_for_each_entry_safe(folio, next, &list, _deferred_list) {
> -		if (!folio_trylock(folio))
> -			goto next;
> -		/* split_huge_page() removes page from list on success */
> +	while ((folio = folio_batch_next(&batch)) != NULL) {
>  		if (!split_folio(folio))
>  			split++;
>  		folio_unlock(folio);
> -next:
> -		folio_put(folio);
>  	}
>  
>  	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> -	list_splice_tail(&list, &ds_queue->split_queue);
> +	while ((folio = folio_batch_next(&batch)) != NULL) {
> +		if (!folio_test_large(folio))
> +			continue;
> +		list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
> +	}
>  	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
>  
> +	folios_put(&batch);
> +
>  	/*
>  	 * Stop shrinker if we didn't split any page, but the queue is empty.
>  	 * This can happen if pages were freed under us.

I've added this patch to my branch and tested (still without the patch that I fingered as the culprit originally, for now). Unfortunately it is still blowing up at about the same rate, although it looks very different now. I've seen bad things twice: the first time was RCU stalls, but systemd had turned the log level down, so there was no stack trace and I couldn't get any further information. The second time, this:

[  338.519401] Unable to handle kernel paging request at virtual address fffc001b13a8c870
[  338.519402] Unable to handle kernel paging request at virtual address fffc001b13a8c870
[  338.519407] Mem abort info:
[  338.519407]   ESR = 0x0000000096000004
[  338.519408]   EC = 0x25: DABT (current EL), IL = 32 bits
[  338.519588] Unable to handle kernel paging request at virtual address fffc001b13a8c870
[  338.519591] Mem abort info:
[  338.519592]   ESR = 0x0000000096000004
[  338.519593]   EC = 0x25: DABT (current EL), IL = 32 bits
[  338.519594]   SET = 0, FnV = 0
[  338.519595]   EA = 0, S1PTW = 0
[  338.519596]   FSC = 0x04: level 0 translation fault
[  338.519597] Data abort info:
[  338.519597]   ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
[  338.519598]   CM = 0, WnR = 0, TnD = 0, TagAccess = 0
[  338.519599]   GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
[  338.519600] [fffc001b13a8c870] address between user and kernel address ranges
[  338.519602] Internal error: Oops: 0000000096000004 [#1] PREEMPT SMP
[  338.519605] Modules linked in:
[  338.519607] CPU: 43 PID: 3234 Comm: usemem Not tainted 6.8.0-rc5-00465-g279cb41b481e-dirty #3
[  338.519610] Hardware name: linux,dummy-virt (DT)
[  338.519611] pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[  338.519613] pc : down_read_trylock+0x2c/0xd0
[  338.519618] lr : folio_lock_anon_vma_read+0x74/0x2c8
[  338.519623] sp : ffff800087f935c0
[  338.519623] x29: ffff800087f935c0 x28: 0000000000000000 x27: ffff800087f937e0
[  338.519626] x26: 0000000000000001 x25: ffff800087f937a8 x24: fffffc0007258180
[  338.519628] x23: ffff800087f936c8 x22: fffc001b13a8c870 x21: ffff0000f7d51d69
[  338.519630] x20: ffff0000f7d51d68 x19: fffffc0007258180 x18: 0000000000000000
[  338.519632] x17: 0000000000000001 x16: ffff0000c90ab458 x15: 0000000000000040
[  338.519634] x14: ffff0000c8c7b558 x13: 0000000000000228 x12: 000040f22f534640
[  338.519637] x11: 0000000000000000 x10: 0000000000000000 x9 : ffff800080338b3c
[  338.519639] x8 : ffff800087f93618 x7 : 0000000000000000 x6 : ffff0000c9692f50
[  338.519641] x5 : ffff800087f936b0 x4 : 0000000000000001 x3 : ffff0000d70d9140
[  338.519643] x2 : 0000000000000001 x1 : fffc001b13a8c870 x0 : fffc001b13a8c870
[  338.519645] Call trace:
[  338.519646]  down_read_trylock+0x2c/0xd0
[  338.519648]  folio_lock_anon_vma_read+0x74/0x2c8
[  338.519650]  rmap_walk_anon+0x1d8/0x2c0
[  338.519652]  folio_referenced+0x1b4/0x1e0
[  338.519655]  shrink_folio_list+0x768/0x10c8
[  338.519658]  shrink_lruvec+0x5dc/0xb30
[  338.519660]  shrink_node+0x4d8/0x8b0
[  338.519662]  do_try_to_free_pages+0xe0/0x5a8
[  338.519665]  try_to_free_mem_cgroup_pages+0x128/0x2d0
[  338.519667]  try_charge_memcg+0x114/0x658
[  338.519671]  __mem_cgroup_charge+0x6c/0xd0
[  338.519672]  __handle_mm_fault+0x42c/0x1640
[  338.519675]  handle_mm_fault+0x70/0x290
[  338.519677]  do_page_fault+0xfc/0x4d8
[  338.519681]  do_translation_fault+0xa4/0xc0
[  338.519682]  do_mem_abort+0x4c/0xa8
[  338.519685]  el0_da+0x2c/0x78
[  338.519687]  el0t_64_sync_handler+0xb8/0x130
[  338.519689]  el0t_64_sync+0x190/0x198
[  338.519692] Code: aa0003e1 b9400862 11000442 b9000862 (f9400000) 
[  338.519693] ---[ end trace 0000000000000000 ]---

The fault occurs while trying to do an atomic_long_read(&sem->count) here:

struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
					  struct rmap_walk_control *rwc)
{
	struct anon_vma *anon_vma = NULL;
	struct anon_vma *root_anon_vma;
	unsigned long anon_mapping;

retry:
	rcu_read_lock();
	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
		goto out;
	if (!folio_mapped(folio))
		goto out;

	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
	root_anon_vma = READ_ONCE(anon_vma->root);
	if (down_read_trylock(&root_anon_vma->rwsem)) { <<<<<<<

I guess we are still corrupting folios?
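
FWIW, the faulting pointer (x0/x22 = fffc001b13a8c870) looks like it was
read out of a stale anon_vma; annotating the path above (my reading, not
verified):

anon_mapping = (unsigned long)READ_ONCE(folio->mapping);  /* stale? */
anon_vma = (struct anon_vma *)(anon_mapping - PAGE_MAPPING_ANON);
	/* if the anon_vma was freed and reused, ->root is junk */
root_anon_vma = READ_ONCE(anon_vma->root);	/* == fffc001b13a8c870 */
down_read_trylock(&root_anon_vma->rwsem);
	/* atomic_long_read(&sem->count) dereferences the junk pointer:
	 * the level 0 translation fault above */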



