From: Qian Cai <quic_qiancai@quicinc.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Nicolas Saenz Julienne <nsaenzju@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Michal Hocko <mhocko@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>,
	"Paul E. McKenney" <paulmck@kernel.org>
Subject: Re: [PATCH 0/6] Drain remote per-cpu directly v3
Date: Wed, 18 May 2022 12:27:22 -0400
Message-ID: <YoUealVA1bMaSH2l@qian>
In-Reply-To: <20220518125152.GQ3441@techsingularity.net>

On Wed, May 18, 2022 at 01:51:52PM +0100, Mel Gorman wrote:
> On Tue, May 17, 2022 at 07:35:07PM -0400, Qian Cai wrote:
> > On Thu, May 12, 2022 at 09:50:37AM +0100, Mel Gorman wrote:
> > > Changelog since v2
> > > o More conversions from page->lru to page->[pcp_list|buddy_list]
> > > o Additional test results in changelogs
> > > 
> > > Changelog since v1
> > > o Fix unsafe RT locking scheme
> > > o Use spin_trylock on UP PREEMPT_RT
> > > 
> > > This series has the same intent as Nicolas' series "mm/page_alloc: Remote
> > > per-cpu lists drain support" -- avoid interference of a high priority
> > > task due to a workqueue item draining per-cpu page lists. While many
> > > workloads can tolerate a brief interruption, it may cause a real-time
> > > task running on a NOHZ_FULL CPU to miss a deadline and, at minimum,
> > > the draining is non-deterministic.
> > > 
> > > Currently an IRQ-safe local_lock protects the page allocator per-cpu lists.
> > > The local_lock on its own prevents migration and the IRQ disabling protects
> > > from corruption due to an interrupt arriving while a page allocation is
> > > in progress. The locking is inherently unsafe for remote access unless
> > > the CPU is hot-removed.
> > > 
> > > This series adjusts the locking. A spinlock is added to struct
> > > per_cpu_pages to protect the list contents while local_lock_irq continues
> > > to prevent migration and IRQ reentry. This allows a remote CPU to safely
> > > drain a remote per-cpu list.
> > > 
> > > This series is a partial series. Follow-on work should allow the
> > > local_irq_save to be converted to a local_irq to avoid IRQs being
> > > disabled/enabled in most cases. Consequently, there are some TODO comments
> > > highlighting the places that would change if local_irq was used. However,
> > > there are enough corner cases that it deserves a series on its own
> > > separated by one kernel release and the priority right now is to avoid
> > > interference of high priority tasks.
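
(A minimal sketch of the locking arrangement described in the cover
letter above; names here are illustrative rather than the exact patch,
and the real struct per_cpu_pages lives in include/linux/mmzone.h:)

	#include <linux/spinlock.h>
	#include <linux/list.h>

	/* Illustrative stand-in for the patched struct per_cpu_pages. */
	struct pcp_sketch {
		spinlock_t lock;		/* protects the lists below */
		int count;			/* pages on the lists */
		struct list_head lists[4];	/* per-migratetype/order lists */
	};

	/*
	 * A remote CPU can now drain another CPU's lists by taking only
	 * the spinlock.  The local allocation/free fast paths also hold
	 * the local_lock, which continues to prevent migration and IRQ
	 * reentry on the owning CPU.
	 */
	static void drain_remote_pcp_sketch(struct pcp_sketch *pcp)
	{
		unsigned long flags;

		spin_lock_irqsave(&pcp->lock, flags);
		/* ... return pages on pcp->lists to the buddy allocator ... */
		spin_unlock_irqrestore(&pcp->lock, flags);
	}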
> > 
> > Reverting the whole series fixed an issue where offlining a memory
> > section blocks for hours on today's linux-next tree.
> > 
> >  __wait_rcu_gp
> >  synchronize_rcu at kernel/rcu/tree.c:3915
> >  lru_cache_disable at mm/swap.c:886
> >  __alloc_contig_migrate_range at mm/page_alloc.c:9078
> >  isolate_single_pageblock at mm/page_isolation.c:405
> >  start_isolate_page_range
> >  offline_pages
> >  memory_subsys_offline
> >  device_offline
> >  online_store
> >  dev_attr_store
> >  sysfs_kf_write
> >  kernfs_fop_write_iter
> >  new_sync_write
> >  vfs_write
> >  ksys_write
> >  __arm64_sys_write
> >  invoke_syscall
> >  el0_svc_common.constprop.0
> >  do_el0_svc
> >  el0_svc
> >  el0t_64_sync_handler
> >  el0t_64_sync
> > 
> > For full disclosure, I have also reverted the commit 0d523026abd4
> > ("mm/page_alloc: fix tracepoint mm_page_alloc_zone_locked()"), so the
> > series can be reverted cleanly. But, I can't see how the commit
> > 0d523026abd4 could cause this issue at all.
> 
> This is halting in __lru_add_drain_all where it calls synchronize_rcu
> before the drain even happens. It's also an LRU drain, not a PCP drain,
> which is what the series affects, and the allocator does not use RCU.
> In a KVM machine, I can do
> 
> $ for BANK in `(for i in {1..20}; do echo $((RANDOM%416)); done) | sort -n | uniq`; do
>     BEFORE=`cat /sys/devices/system/memory/memory$BANK/online`
>     echo 0 > /sys/devices/system/memory/memory$BANK/online
>     AFTER=`cat /sys/devices/system/memory/memory$BANK/online`
>     printf "%4d %d -> %d\n" $BANK $BEFORE $AFTER
>   done
>    3 1 -> 0
>   57 1 -> 0
>   74 1 -> 0
>   93 1 -> 0
>  101 1 -> 0
>  128 1 -> 0
>  133 1 -> 0
>  199 1 -> 0
>  223 1 -> 0
>  225 1 -> 0
>  229 1 -> 0
>  243 1 -> 0
>  263 1 -> 0
>  300 1 -> 0
>  309 1 -> 0
>  329 1 -> 0
>  355 1 -> 0
>  365 1 -> 0
>  372 1 -> 0
>  383 1 -> 0
> 
> It offlines 20 sections, although after several attempts free -m starts
> reporting negative used memory, so there is a bug of some description.
> How are you testing this exactly? Is it every time or intermittent? Are
> you confident that reverting the series makes the problem go away?

Cc'ing Paul. Reverting either this series or Paul's 3 patches below from
today's linux-next tree fixes the issue.

ca52639daa5b rcu-tasks: Drive synchronous grace periods from calling task
89ad98e93ce8 rcu-tasks: Move synchronize_rcu_tasks_generic() down
0d90e7225fb1 rcu-tasks: Split rcu_tasks_one_gp() from rcu_tasks_kthread()

The issue was reproduced by running the script below on an arm64 server.
I can reproduce it every time within 5 attempts. I noticed that when it
happens, a few RCU kthreads are all stuck in this line,

        rcuwait_wait_event(&rtp->cbs_wait,
                           (needgpcb = rcu_tasks_need_gpcb(rtp)),
                           TASK_IDLE);

rcu_tasks_kthread
rcu_tasks_rude_kthread
rcu_tasks_trace_kthread
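
(For context, rcuwait_wait_event() sleeps until its condition becomes
true and a wakeup arrives. A minimal sketch of the pattern, with
hypothetical names; see include/linux/rcuwait.h for the real API:)

	#include <linux/rcuwait.h>
	#include <linux/sched.h>

	static struct rcuwait demo_wait;	/* rcuwait_init() before use */
	static int demo_work_pending;		/* hypothetical condition */

	static void demo_consumer(void)
	{
		/* Sleeps in TASK_IDLE until the condition is nonzero. */
		rcuwait_wait_event(&demo_wait, READ_ONCE(demo_work_pending),
				   TASK_IDLE);
	}

	static void demo_producer(void)
	{
		WRITE_ONCE(demo_work_pending, 1);
		/* Without this wakeup, the consumer blocks indefinitely. */
		rcuwait_wake_up(&demo_wait);
	}

If the producer-side wakeup never arrives, a kthread sitting in
rcuwait_wait_event() looks permanently stuck at that line, which would
match the symptom above.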


#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0

import os
import re
import subprocess


def mem_iter():
    """Yield the sysfs directory of each memory section."""
    base_dir = '/sys/devices/system/memory/'
    for curr_dir in os.listdir(base_dir):
        if re.match(r'memory\d+', curr_dir):
            yield base_dir + curr_dir


if __name__ == '__main__':
    print('- Try to remove each memory section and then add it back.')
    for mem_dir in mem_iter():
        status = f'{mem_dir}/online'
        with open(status) as f:
            online = f.read().rstrip()
        if online == '1':
            section = os.path.basename(mem_dir)
            # Offlining can expectedly fail for many reasons (e.g.
            # unmovable pages in the section), so failures are ignored.
            print(f'- Try to remove {section}.')
            proc = subprocess.run(f'echo 0 | sudo tee {status}', shell=True)
            if proc.returncode == 0:
                print(f'- Try to add {section}.')
                subprocess.check_call(f'echo 1 | sudo tee {status}',
                                      shell=True)


Thread overview: 44+ messages
2022-05-12  8:50 [PATCH 0/6] Drain remote per-cpu directly v3 Mel Gorman
2022-05-12  8:50 ` [PATCH 1/6] mm/page_alloc: Add page->buddy_list and page->pcp_list Mel Gorman
2022-05-13 11:59   ` Nicolas Saenz Julienne
2022-05-19  9:36   ` Vlastimil Babka
2022-05-12  8:50 ` [PATCH 2/6] mm/page_alloc: Use only one PCP list for THP-sized allocations Mel Gorman
2022-05-19  9:45   ` Vlastimil Babka
2022-05-12  8:50 ` [PATCH 3/6] mm/page_alloc: Split out buddy removal code from rmqueue into separate helper Mel Gorman
2022-05-13 12:01   ` Nicolas Saenz Julienne
2022-05-19  9:52   ` Vlastimil Babka
2022-05-23 16:09   ` Qais Yousef
2022-05-24 11:55     ` Mel Gorman
2022-05-25 11:23       ` Qais Yousef
2022-05-12  8:50 ` [PATCH 4/6] mm/page_alloc: Remove unnecessary page == NULL check in rmqueue Mel Gorman
2022-05-13 12:03   ` Nicolas Saenz Julienne
2022-05-19 10:57   ` Vlastimil Babka
2022-05-19 12:13     ` Mel Gorman
2022-05-19 12:26       ` Vlastimil Babka
2022-05-12  8:50 ` [PATCH 5/6] mm/page_alloc: Protect PCP lists with a spinlock Mel Gorman
2022-05-13 12:22   ` Nicolas Saenz Julienne
2022-05-12  8:50 ` [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists Mel Gorman
2022-05-12 19:37   ` Andrew Morton
2022-05-13 15:04     ` Mel Gorman
2022-05-13 15:19       ` Nicolas Saenz Julienne
2022-05-13 18:23         ` Mel Gorman
2022-05-17 12:57           ` Mel Gorman
2022-05-12 19:43 ` [PATCH 0/6] Drain remote per-cpu directly v3 Andrew Morton
2022-05-13 14:23   ` Mel Gorman
2022-05-13 19:38     ` Andrew Morton
2022-05-16 10:53       ` Mel Gorman
2022-05-13 12:24 ` Nicolas Saenz Julienne
2022-05-17 23:35 ` Qian Cai
2022-05-18 12:51   ` Mel Gorman
2022-05-18 16:27     ` Qian Cai [this message]
2022-05-18 17:15       ` Paul E. McKenney
2022-05-19 13:29         ` Qian Cai
2022-05-19 19:15           ` Paul E. McKenney
2022-05-19 21:05             ` Qian Cai
2022-05-19 21:29               ` Paul E. McKenney
2022-05-18 17:26   ` Marcelo Tosatti
2022-05-18 17:44     ` Marcelo Tosatti
2022-05-18 18:01 ` Nicolas Saenz Julienne
2022-05-26 17:19 ` Qian Cai
2022-05-27  8:39   ` Mel Gorman
2022-05-27 12:58     ` Qian Cai
