From: Mike Kravetz <mike.kravetz@oracle.com>
To: Michal Hocko <mhocko@kernel.org>, "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrea Arcangeli <aarcange@redhat.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Andi Kleen <andi.kleen@intel.com>, Jan Kara <jack@suse.cz>,
	Matthew Wilcox <mawilcox@microsoft.com>,
	Hugh Dickins <hughd@google.com>, Minchan Kim <minchan@kernel.org>,
	Shaohua Li <shli@fb.com>, Christopher Lameter <cl@linux.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>,
	Punit Agrawal <punit.agrawal@arm.com>,
	Anshuman Khandual <khandual@linux.vnet.ibm.com>
Subject: Re: [PATCH -mm] mm, hugetlb: Pass fault address to no page handler
Date: Wed, 16 May 2018 13:04:31 -0700	[thread overview]
Message-ID: <c94f7180-d49b-3a9d-8d9e-002642ee9f3b@oracle.com> (raw)
In-Reply-To: <20180516091226.GM12670@dhcp22.suse.cz>

On 05/16/2018 02:12 AM, Michal Hocko wrote:
> On Tue 15-05-18 08:57:56, Huang, Ying wrote:
>> From: Huang Ying <ying.huang@intel.com>
>>
>> This is to take better advantage of the huge page clearing
>> optimization (c79b57e462b5d, "mm: hugetlb: clear target sub-page last
>> when clearing huge page"), which clears the sub-page to be accessed
>> last, so that its cache lines are not evicted while the other
>> sub-pages are being cleared.  This requires knowing the address of
>> the sub-page that will be accessed, that is, the fault address inside
>> the huge page.  So the hugetlb no-page fault handler is changed to
>> pass that information.  This benefits workloads that do not access
>> the beginning of the huge page right after the page fault.
>>
>> With this patch, throughput increases ~28.1% in the vm-scalability
>> anon-w-seq test case with 88 processes on a 2-socket Xeon E5 2699 v4
>> system (44 cores, 88 threads).  The test case creates 88 processes,
>> each of which mmaps a large anonymous memory area and writes to it
>> from the end to the beginning.  From the point of view of each
>> process, the other processes act as a background workload that
>> generates heavy cache pressure.  At the same time, the cache miss
>> rate drops from ~36.3% to ~25.6%, IPC (instructions per cycle)
>> increases from 0.3 to 0.37, and the time spent in user space is
>> reduced by ~19.3%.
> 
> This paragraph is confusing as Mike mentioned already. It would
> probably be more helpful to see how the test was configured to use
> hugetlb pages and what the end benefit is.
> 
> I do not have any real objection to the implementation so feel free to
> add
> Acked-by: Michal Hocko <mhocko@suse.com>
> I am just wondering what use case is driving this. Or is it just a
> generic optimization that always makes sense to do? Indicating that in
> the changelog would be helpful as well.
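
For illustration, here is a simplified userspace sketch of the "clear
the target sub-page last" idea described in the changelog above.  The
names (clear_huge_page_sketch, SUBPAGE_SIZE) are hypothetical, and the
in-kernel clear_huge_page() uses a more elaborate clearing order; this
only shows the principle of zeroing the faulting sub-page last so its
cache lines are still resident when the application touches it right
after the fault.

#include <stddef.h>
#include <string.h>

#define SUBPAGE_SIZE 4096UL

/* Clear a huge page made up of 'nr_subpages' sub-pages, finishing with
 * the sub-page that contains 'fault_addr'. */
static void clear_huge_page_sketch(char *huge_page, size_t nr_subpages,
				   const char *fault_addr)
{
	size_t target = (size_t)(fault_addr - huge_page) / SUBPAGE_SIZE;
	size_t i;

	/* Zero every sub-page except the one about to be accessed ... */
	for (i = 0; i < nr_subpages; i++) {
		if (i == target)
			continue;
		memset(huge_page + i * SUBPAGE_SIZE, 0, SUBPAGE_SIZE);
	}
	/* ... then zero the faulting sub-page last so it stays cache-hot. */
	memset(huge_page + target * SUBPAGE_SIZE, 0, SUBPAGE_SIZE);
}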

I just noticed that the optimization was not added for 'gigantic' pages.
Should we consider adding support for gigantic pages as well?  It may be
that the cache miss cost is insignificant when added to the time required
to clear a 1GB (for x86) gigantic page.

One more thing: I'm guessing the copy_huge/gigantic_page() routines would
see a similar benefit, specifically for copies done as a result of a COW.
Is that another area to consider?

That gets back to Michal's question about a specific use case versus a
generic optimization.  Unless the code is simple (as in this patch), it
seems like we should hold off on additional optimizations unless there is
a specific use case.

I'm still OK with this change.
-- 
Mike Kravetz

Thread overview: 12+ messages
2018-05-15  0:57 [PATCH -mm] mm, hugetlb: Pass fault address to no page handler Huang, Ying
2018-05-15  3:25 ` Mike Kravetz
2018-05-15  5:19   ` Huang, Ying
2018-05-15 10:38 ` Kirill A. Shutemov
2018-05-16  0:42   ` Huang, Ying
2018-05-16  8:03     ` Kirill A. Shutemov
2018-05-17  1:39       ` Huang, Ying
2018-05-15 20:03 ` David Rientjes
2018-05-16  9:12 ` Michal Hocko
2018-05-16 20:04   ` Mike Kravetz [this message]
2018-05-17  1:45     ` Huang, Ying
2018-05-17  1:41   ` Huang, Ying
