From: Tim Chen <tim.c.chen@linux.intel.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Aaron Lu <aaron.lu@intel.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Dave Hansen <dave.hansen@intel.com>,
	Tim Chen <tim.c.chen@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Ying Huang <ying.huang@intel.com>
Subject: Re: [PATCH v2 0/5] mm: support parallel free of memory
Date: Thu, 16 Mar 2017 11:36:21 -0700	[thread overview]
Message-ID: <1489689381.2733.114.camel@linux.intel.com> (raw)
In-Reply-To: <20170316090732.GF30501@dhcp22.suse.cz>

On Thu, 2017-03-16 at 10:07 +0100, Michal Hocko wrote:
> On Wed 15-03-17 14:38:34, Tim Chen wrote:
> > 
> > On Wed, 2017-03-15 at 17:28 +0100, Michal Hocko wrote:
> > > 
> > > On Wed 15-03-17 23:44:07, Aaron Lu wrote:
> > > > 
> > > > 
> > > > On Wed, Mar 15, 2017 at 03:18:14PM +0100, Michal Hocko wrote:
> > > > > 
> > > > > 
> > > > > On Wed 15-03-17 16:59:59, Aaron Lu wrote:
> > > > > [...]
> > > > > > 
> > > > > > 
> > > > > > The proposed parallel free does this: if the process has many pages to be
> > > > > > freed, accumulate them in these struct mmu_gather_batch(es) one after
> > > > > > another until 256K pages are accumulated. Then take this singly linked
> > > > > > list starting from tlb->local.next off struct mmu_gather *tlb and free
> > > > > > them in a worker thread. The main thread can return to continue zapping
> > > > > > other pages (after freeing the pages pointed to by tlb->local.pages).
> > > > > I haven't had a look at the implementation yet, but there are two
> > > > > concerns that arise from this description. Firstly, how are we going
> > > > > to tune the number of workers? I assume there will be some upper bound
> > > > > (one of the patch subjects mentions debugfs for tuning) and secondly
> > > > The workers are put in a dedicated workqueue which is introduced in
> > > > patch 3/5 and the number of workers can be tuned through that workqueue's
> > > > sysfs interface: max_active.
> > > I suspect we cannot expect users to tune this. What do you consider a
> > > reasonable default?
> > From Aaron's data, it seems like 4 is a reasonable value for max_active:
> > 
> > max_active:   time
> > 1             8.9s   ±0.5%
> > 2             5.65s  ±5.5%
> > 4             4.84s  ±0.16%
> > 8             4.77s  ±0.97%
> > 16            4.85s  ±0.77%
> > 32            6.21s  ±0.46%
> OK, but this will depend on the HW, right? Also, now that I am looking at
> those numbers more closely: this was about unmapping a 320GB area, and by
> using 4 times more CPUs you managed to halve the run time. Is this really
> worth it? Sure, if those CPUs were idle then this is a clear win, but if
> the system is moderately busy then it doesn't look like a clear win to
> me.

It looks like we can cut the exit time in half with only 2 workers, which
keeps the disturbance to the system minimal.
Perhaps we should do this expedited exit only when there are idle CPUs around.
We could use the root sched domain's overload indicator for such a quick check.

Tim
