From: Minchan Kim <minchan@kernel.org>
To: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>,
	Miguel de Dios <migueldedios@google.com>,
	Wei Wang <wvw@google.com>, Johannes Weiner <hannes@cmpxchg.org>,
	Mel Gorman <mgorman@techsingularity.net>
Subject: Re: [PATCH] mm: release the spinlock on zap_pte_range
Date: Mon, 29 Jul 2019 17:20:52 +0900	[thread overview]
Message-ID: <20190729082052.GA258885@google.com> (raw)
In-Reply-To: <20190729074523.GC9330@dhcp22.suse.cz>

On Mon, Jul 29, 2019 at 09:45:23AM +0200, Michal Hocko wrote:
> On Mon 29-07-19 16:10:37, Minchan Kim wrote:
> > In our testing (camera recording), Miguel and Wei found that unmap_page_range
> > easily takes more than 6ms with preemption disabled. The reason is that it
> > holds the page table spinlock for the entire 512-page operation in a PMD.
> > 6.2ms is far from trivial for user experience: if an RT task cannot run in
> > that time, it can cause frame drops or audio glitches.
> 
> Where is the time spent during the tear down? 512 pages doesn't sound
> like a lot to tear down. Is it the TLB flushing?

Miguel confirmed there is no such big latency without mark_page_accessed
in zap_pte_range, so I guess it's the contention on the LRU lock as well
as the heavy activate_page overhead, which is not trivial either.

> 
> > This patch adds a preemption point, like copy_pte_range.
> > 
> > Reported-by: Miguel de Dios <migueldedios@google.com>
> > Reported-by: Wei Wang <wvw@google.com>
> > Cc: Michal Hocko <mhocko@kernel.org>
> > Cc: Johannes Weiner <hannes@cmpxchg.org>
> > Cc: Mel Gorman <mgorman@techsingularity.net>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  mm/memory.c | 19 ++++++++++++++++---
> >  1 file changed, 16 insertions(+), 3 deletions(-)
> > 
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 2e796372927fd..bc3e0c5e4f89b 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -1007,6 +1007,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
> >  				struct zap_details *details)
> >  {
> >  	struct mm_struct *mm = tlb->mm;
> > +	int progress = 0;
> >  	int force_flush = 0;
> >  	int rss[NR_MM_COUNTERS];
> >  	spinlock_t *ptl;
> > @@ -1022,7 +1023,16 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
> >  	flush_tlb_batched_pending(mm);
> >  	arch_enter_lazy_mmu_mode();
> >  	do {
> > -		pte_t ptent = *pte;
> > +		pte_t ptent;
> > +
> > +		if (progress >= 32) {
> > +			progress = 0;
> > +			if (need_resched())
> > +				break;
> > +		}
> > +		progress += 8;
> 
> Why 8?

Just copied from copy_pte_range.


Thread overview: 30+ messages
2019-07-29  7:10 [PATCH] mm: release the spinlock on zap_pte_range Minchan Kim
2019-07-29  7:45 ` Michal Hocko
2019-07-29  8:20   ` Minchan Kim [this message]
2019-07-29  8:35     ` Michal Hocko
2019-07-30 12:11       ` Minchan Kim
2019-07-30 12:32         ` Michal Hocko
2019-07-30 12:39           ` Minchan Kim
2019-07-30 12:57             ` Michal Hocko
2019-07-31  5:44               ` Minchan Kim
2019-07-31  7:21                 ` Michal Hocko
2019-08-06 10:55                   ` Minchan Kim
2019-08-09 12:43                     ` [RFC PATCH] mm: drop mark_page_access from the unmap path Michal Hocko
2019-08-09 17:57                       ` Mel Gorman
2019-08-09 18:34                       ` Johannes Weiner
2019-08-12  8:09                         ` Michal Hocko
2019-08-12 15:07                           ` Johannes Weiner
2019-08-13 10:51                             ` Michal Hocko
2019-08-26 12:06                               ` Michal Hocko
2019-08-27 16:00                                 ` Johannes Weiner
2019-08-27 18:41                                   ` Michal Hocko
2019-07-30 19:42     ` [PATCH] mm: release the spinlock on zap_pte_range Andrew Morton
2019-07-31  6:14       ` Minchan Kim
2019-08-06  7:05 ` [mm] 755d6edc1a: will-it-scale.per_process_ops -4.1% regression kernel test robot
2019-08-06  8:04   ` Michal Hocko
2019-08-06 11:00     ` Minchan Kim
2019-08-06 11:11       ` Michal Hocko
