From: Balbir Singh <bsingharora@gmail.com>
To: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	mhocko@suse.com, vbabka@suse.cz, mgorman@suse.de,
	minchan@kernel.org, aneesh.kumar@linux.vnet.ibm.com,
	srikar@linux.vnet.ibm.com, haren@linux.vnet.ibm.com,
	jglisse@redhat.com, dave.hansen@intel.com,
	dan.j.williams@intel.com, zi.yan@cs.rutgers.edu
Subject: Re: [PATCH 0/6] Enable parallel page migration
Date: Wed, 22 Feb 2017 21:52:11 +1100	[thread overview]
Message-ID: <aaafb3b7-66db-36a2-7514-a826f295fead@gmail.com> (raw)
In-Reply-To: <4efb25de-e036-4015-e764-70b4c911ca67@linux.vnet.ibm.com>



On 22/02/17 16:55, Anshuman Khandual wrote:
> On 02/22/2017 10:34 AM, Balbir Singh wrote:
>> On Fri, Feb 17, 2017 at 04:54:47PM +0530, Anshuman Khandual wrote:
>>> 	This patch series is based on the work posted by Zi Yan back in
>>> November 2016 (https://lkml.org/lkml/2016/11/22/457) but includes some
>>> amount of clean-up and re-organization. This series depends on the THP migration
>>> optimization patch series posted by Naoya Horiguchi on 8th November 2016
>>> (https://lwn.net/Articles/705879/). Though Zi Yan has recently reposted
>>> V3 of the THP migration patch series (https://lwn.net/Articles/713667/),
>>> this series is yet to be rebased.
>>>
>>> 	The primary motivation behind this patch series is to achieve higher
>>> memory migration bandwidth whenever possible by using a multi-threaded
>>> instead of a single-threaded copy. All the experiments were done on a
>>> two-socket x86 system (Intel(R) Xeon(R) CPU E5-2650). All the experiments
>>> here use the same allocation size, 4K * 100000 (which did not split evenly
>>> into 2MB huge pages). Here are the results.
>>>
>>> Vanilla:
>>>
>>> Moved 100000 normal pages in 247.000000 msecs 1.544412 GBs
>>> Moved 100000 normal pages in 238.000000 msecs 1.602814 GBs
>>> Moved 195 huge pages in 252.000000 msecs 1.513769 GBs
>>> Moved 195 huge pages in 257.000000 msecs 1.484318 GBs
>>>
>>> THP migration improvements:
>>>
>>> Moved 100000 normal pages in 302.000000 msecs 1.263145 GBs
>>
>> Is there a decrease here for normal pages?
> 
> Yeah.
> 
>>
>>> Moved 100000 normal pages in 262.000000 msecs 1.455991 GBs
>>> Moved 195 huge pages in 120.000000 msecs 3.178914 GBs
>>> Moved 195 huge pages in 129.000000 msecs 2.957130 GBs
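
As an aside (my reading, not stated in the cover letter): the "GBs" figures above are consistent with a constant payload of 4096 * 100000 bytes per run, expressed in GiB/s; the huge-page runs move 195 x 2MB huge pages plus the remaining 160-page tail. A quick sanity check:

```python
# Every run moves the same total payload; "GBs" = payload in GiB / seconds.
TOTAL_BYTES = 100000 * 4096  # 409,600,000 bytes per run

def gibs(msecs):
    """Bandwidth in GiB/s for moving TOTAL_BYTES in msecs milliseconds."""
    return TOTAL_BYTES / 2**30 / (msecs / 1000.0)

# Reported figures from the runs above, reproduced from elapsed times.
for msecs, reported in [(247, 1.544412), (238, 1.602814),
                        (252, 1.513769), (120, 3.178914)]:
    print(f"{msecs:3d} ms -> {gibs(msecs):.6f} GiB/s (reported {reported})")
```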
>>>
>>> THP migration improvements + Multi threaded page copy:
>>>
>>> Moved 100000 normal pages in 1589.000000 msecs 0.240069 GBs **
>>
>> Ditto?
> 
> Yeah, I have already mentioned this after the data in the
> cover letter. This new flag is controlled from user space
> while invoking the system calls. Users should be careful to
> use it for scenarios where it's useful and avoid it for cases
> where it hurts.

Fair enough. I wonder if _MT should be disabled for normal pages
and allowed only for THP migration. I think it might be worth
evaluating the overheads.
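
To put numbers on that asymmetry, taking the better of each pair of runs quoted above (a back-of-the-envelope comparison from the posted data, not a new measurement):

```python
# Best reported rate (GiB/s) per configuration, from the numbers above.
vanilla = {"normal": 1.602814, "huge": 1.513769}
mt_copy = {"normal": 0.240069, "huge": 7.064254}  # THP migration + MT copy

for kind in ("normal", "huge"):
    ratio = mt_copy[kind] / vanilla[kind]
    print(f"{kind:6s}: {ratio:.2f}x relative to vanilla")
```

Roughly a 4-5x win for huge pages against a ~6-7x loss for normal pages, which supports gating the multi-threaded copy on page size.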

> 
>>
>>> Moved 100000 normal pages in 1932.000000 msecs 0.197448 GBs **
>>> Moved 195 huge pages in 54.000000 msecs 7.064254 GBs ***
>>> Moved 195 huge pages in 86.000000 msecs 4.435694 GBs ***
>>>
>>
>> Could you also comment on the CPU utilization impact of these
>> patches?
> 
> Yeah, it really makes sense to analyze this impact. I have mentioned
> it in the outstanding issues section of the series. But what
> exactly do we need to analyze from a CPU utilization point of
> view? For example, what is the probability that the workqueue
> jobs will push some tasks off the run queue and make them starve
> for longer? Could you please give some details on this?
> 

I wonder if the CPU utilization is so high that it's hurting the CPU
(system time) at the cost of increased migration speeds. We may need
a trade-off (see my comment above).
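
One way to start evaluating that trade-off from user space is to compare wall-clock time against user/system CPU time around the copy via getrusage(2). A generic sketch (the workload below is a stand-in for a migration-sized copy, not the actual migration path):

```python
import os
import resource
import time

def profile(fn):
    """Run fn() and return (wall, user-CPU, system-CPU) seconds."""
    r0 = resource.getrusage(resource.RUSAGE_SELF)
    t0 = time.monotonic()
    fn()
    wall = time.monotonic() - t0
    r1 = resource.getrusage(resource.RUSAGE_SELF)
    return wall, r1.ru_utime - r0.ru_utime, r1.ru_stime - r0.ru_stime

# Stand-in workload: copy 10000 4K-pages' worth of data.
src = os.urandom(10000 * 4096)
wall, user, sys_ = profile(lambda: bytearray(src))
print(f"wall {wall:.4f}s  user {user:.4f}s  sys {sys_:.4f}s")
```

If (user + sys) grows much faster than wall time shrinks as copy threads are added, the extra bandwidth is being bought with CPU time stolen from other tasks.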

Balbir Singh.
