From: "Paul E. McKenney" <paulmck@kernel.org>
To: Qian Cai <quic_qiancai@quicinc.com>
Cc: Mel Gorman <mgorman@techsingularity.net>,
	Andrew Morton <akpm@linux-foundation.org>,
	Nicolas Saenz Julienne <nsaenzju@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Michal Hocko <mhocko@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>,
	kafai@fb.com, kpsingh@kernel.org
Subject: Re: [PATCH 0/6] Drain remote per-cpu directly v3
Date: Thu, 19 May 2022 14:29:39 -0700
Message-ID: <20220519212939.GE1790663@paulmck-ThinkPad-P17-Gen-1>
In-Reply-To: <YoaxAMvQwHzDPxyi@qian>

On Thu, May 19, 2022 at 05:05:04PM -0400, Qian Cai wrote:
> On Thu, May 19, 2022 at 12:15:24PM -0700, Paul E. McKenney wrote:
> > Is the task doing offline_pages()->synchronize_rcu() doing this
> > repeatedly?  Or is there a stalled RCU grace period?  (From what
> > I can see, offline_pages() is not doing huge numbers of calls to
> > synchronize_rcu() in any of its loops, but I freely admit that I do not
> > know this code.)
> 
> Yes, we are running into an endless loop in isolate_single_pageblock().
> A similar issue happened not long ago, so I am wondering whether we
> failed to solve it entirely back then. Anyway, I will continue the
> thread over there.
> 
> https://lore.kernel.org/all/YoavU%2F+NfQIzQiDF@qian/

I do know that feeling.

> > Or is it possible that reverting those three patches simply decreases
> > the probability of failure, rather than eliminating the failure?
> > Such a decrease could be due to many things, for example, changes to
> > offsets and sizes of data structures.
> 
> Entirely possible. Sorry for the false alarm.

Not a problem!

> > Do you ever see RCU CPU stall warnings?
> 
> No.

OK, then perhaps a sequence of offline_pages() calls.

Hmmm...  The percpu_up_write() function sets ->block to zero before
awakening waiters.  Given wakeup latencies, might an only somewhat
unfortunate sequence of events then allow offline_pages() to starve
readers?  Or is there something I am missing that prevents this from
happening?
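
For reference, a minimal sketch of the release path in question,
modeled on kernel/locking/percpu-rwsem.c (abbreviated: lockdep
annotations omitted, and the interleaving in the trailing comment is
the hypothesis above, not an observed trace):

void percpu_up_write(struct percpu_rw_semaphore *sem)
{
	/*
	 * Clear ->block first: new lock attempts may now succeed,
	 * but nothing sleeping on ->waiters has run yet.
	 */
	atomic_set_release(&sem->block, 0);

	/* Only now prod one sleeping reader/writer to make progress. */
	__wake_up(&sem->waiters, TASK_NORMAL, 1, sem);

	rcu_sync_exit(&sem->rss);
}

/*
 * Hypothesized interleaving, given wakeup latency:
 *
 *	writer A: percpu_up_write()	->block = 0, wakeup queued
 *	writer B: percpu_down_write()	->block = 1 again, before the
 *					woken reader ever gets to run
 *	reader:   finally scheduled	sees ->block != 0, sleeps again
 *
 * A sufficiently tight sequence of offline_pages() calls, each
 * write-acquiring the same percpu_rw_semaphore, could then keep
 * readers waiting indefinitely.
 */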

							Thanx, Paul

