From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
To: Nitesh Narayan Lal <nitesh@redhat.com>,
	Alexander Duyck <alexander.duyck@gmail.com>
Cc: Michal Hocko <mhocko@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Hansen <dave.hansen@intel.com>,
	David Hildenbrand <david@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	lcapitulino@redhat.com, Mel Gorman <mgorman@techsingularity.net>,
	mm-commits@vger.kernel.org,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Oscar Salvador <osalvador@suse.de>,
	Pankaj Gupta <pagupta@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Rik van Riel <riel@surriel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	"Wang, Wei W" <wei.w.wang@intel.com>,
	Matthew Wilcox <willy@infradead.org>,
	Yang Zhang <yang.zhang.wz@gmail.com>,
	linux-mm <linux-mm@kvack.org>
Subject: Re: + mm-introduce-reported-pages.patch added to -mm tree
Date: Tue, 12 Nov 2019 08:18:45 -0800
Message-ID: <f9ac8e16ec2c6b814a43d6df802fa40a0be56743.camel@linux.intel.com>
In-Reply-To: <c1badd0e-c5cb-0e3c-e9c1-102962b152cc@redhat.com>

On Tue, 2019-11-12 at 10:19 -0500, Nitesh Narayan Lal wrote:
> On 11/11/19 5:00 PM, Alexander Duyck wrote:
> > On Mon, Nov 11, 2019 at 10:52 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote:
> > > On 11/6/19 7:16 AM, Michal Hocko wrote:
> > > > I didn't have time to read through newer versions of this patch series
> > > > but I remember there were concerns about this functionality being pulled
> > > > into the page allocator previously both by me and Mel [1][2]. Have those been
> > > > addressed? I do not see an ack from Mel or any other MM people. Is there
> > > > really a consensus that we want something like that living in the
> > > > allocator?
> > > > 
> > > > There has also been a different approach discussed and from [3]
> > > > (referenced by the cover letter) I can only see
> > > > 
> > > > : Then Nitesh's solution had changed to the bitmap approach[7]. However it
> > > > : has been pointed out that this solution doesn't deal with sparse memory,
> > > > : hotplug, and various other issues.
> > > > 
> > > > which looks more like something still to be done than a fundamental
> > > > roadblock.
> > > > 
> > > > [1] http://lkml.kernel.org/r/20190912163525.GV2739@techsingularity.net
> > > > [2] http://lkml.kernel.org/r/20190912091925.GM4023@dhcp22.suse.cz
> > > > [3] http://lkml.kernel.org/r/29f43d5796feed0dec8e8bb98b187d9dac03b900.camel@linux.intel.com
> > > > 
> > > [...]
> > > 
> > > Hi,
> > > 
> > > I performed some experiments to find the root cause of the performance
> > > degradation Alexander reported with my v12 patch-set. [1]
> > > 
> > > I will try to give a brief background on the previous discussion
> > > under v12 (Alexander can correct me if I am missing something).
> > > Alexander pointed out two issues with my v12 posting [2]
> > > (excluding the sparse zone and memory hotplug/hotremove support):
> > > 
> > > - A crash caused by my not using spin_lock_irqsave()
> > >   (the fix suggestion came from Alexander).
> > > 
> > > - Performance degradation with Alexander's suggested setup: a modified
> > >   will-it-scale/page_fault test with THP, CONFIG_SLAB_FREELIST_RANDOM, and
> > >   CONFIG_SHUFFLE_PAGE_ALLOCATOR enabled. When I was using (MAX_ORDER - 2)
> > >   as the PAGE_REPORTING_MIN_ORDER, I also observed significant performance
> > >   degradation (around 20% in the number of threads launched on the 16th
> > >   vCPU). However, on switching PAGE_REPORTING_MIN_ORDER to (MAX_ORDER - 1),
> > >   I was able to get performance similar to what Alexander is reporting.
> > > 
> > > PAGE_REPORTING_MIN_ORDER is the minimum order a page must have to be
> > > captured in the bitmap and reported to the hypervisor.
> > > 
> > > For the discussion comparing the two series, the performance aspect is
> > > the more relevant and important one.
> > > It turns out that with the current implementation the number of vmexits
> > > with PAGE_REPORTING_MIN_ORDER as pageblock_order or (MAX_ORDER - 2) is
> > > significantly larger than with (MAX_ORDER - 1).
> > > 
> > > One of the reasons could be that the lower-order pages are not getting
> > > sufficient time to merge with each other, so they end up being reported
> > > in two separate reporting requests, generating more vmexits. Whereas
> > > with (MAX_ORDER - 1) we don't have that kind of situation, as I never
> > > try to report any page whose order is < (MAX_ORDER - 1).
> > > 
> > > To fix this, I might have to throttle the reporting further, which
> > > would give the lower-order pages time to merge and hence reduce the VM
> > > exits. I will run some experiments to see if I can fix this. In any
> > > case, if anyone has a suggestion I would be more than happy to look in
> > > that direction.
> > That doesn't make any sense. My setup using MAX_ORDER - 2, aka
> > pageblock_order, as the limit doesn't experience the same performance
> > issues the bitmap solution does. That leads me to believe the issue
> > isn't that the pages have not had a chance to be merged.
> > 
> 
> So, I did run your series as well, with a few sysfs variables to see how
> many pages of order (MAX_ORDER - 1) or (MAX_ORDER - 2) are reported at the
> end of the will-it-scale/page_fault4 test.
> What I observed is that the number of (MAX_ORDER - 2) pages reported in
> your case was smaller than what was reported in mine with pageblock_order.
> As you mention below, where the pages are placed on the free list might
> also have an impact.

Another thing you may want to check is how often your notifier is
triggering. One thing I did was to intentionally put a fairly significant
delay between the time the notification is scheduled and the time it
starts. I did this because when an application is freeing memory it will
take some time to completely free it, and if it is going to reallocate the
memory anyway there is no need to rush, since that would just invalidate
the pages you reported.
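
As a rough sketch of the kind of delay I mean (the identifiers here are
illustrative, not code from either series), the notifier can be driven by
a delayed workqueue item so that the free-list walk starts well after the
notification is scheduled:

	/* Illustrative sketch; names are hypothetical, not from the patch. */
	#include <linux/workqueue.h>

	/* Give the freeing task a couple of seconds to finish, or to
	 * reallocate the memory, which would make reporting pointless. */
	#define REPORTING_DELAY_JIFFIES	(2 * HZ)

	static void report_free_pages(struct work_struct *work)
	{
		/* Walk the free lists and report eligible pages here. */
	}

	static DECLARE_DELAYED_WORK(reporting_work, report_free_pages);

	/* Called from the freeing path once enough pages have accumulated.
	 * If the work is already pending this is a no-op, so a burst of
	 * frees collapses into a single deferred reporting pass. */
	static void schedule_page_reporting(void)
	{
		schedule_delayed_work(&reporting_work, REPORTING_DELAY_JIFFIES);
	}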

> > > Following are the numbers I gathered on a 30GB single NUMA, 16 vCPU guest
> > > affined to a single host-NUMA:
> > > 
> > > On 16th vCPU:
> > > With PAGE_REPORTING_MIN_ORDER as (MAX_ORDER - 1):
> > > % Dip in the number of Processes = 1.3 %
> > > % Dip in the number of Threads   = 5.7 %
> > > 
> > > With PAGE_REPORTING_MIN_ORDER as pageblock_order:
> > > % Dip in the number of Processes = 5 %
> > > % Dip in the number of Threads   = 20 %
> > So I don't hold much faith in the thread numbers. I have seen the
> > run-to-run variability be as high as 14%.
> 
> That's interesting. Do you see the variability even with an unmodified kernel?
> Somehow, for me it seems pretty consistent. However, if you are running with
> multiple NUMA nodes it might have a significant impact on the numbers.
> 
> For now, I am only running a single NUMA guest affined to a single NUMA
> of host.

My guest should be running in a single node, and yes, I saw it with just
the unmodified kernel. I am running on the linux-next 20191031 kernel. It
did occur to me that the thread performance seems to have increased
recently. There might be a guest config option impacting things as well,
since I know I have changed a number of variables since then.

> > > Michal's suggestion:
> > > I was able to get a prototype working that uses the page-isolation API:
> > > start_isolate_page_range()/undo_isolate_page_range().
> > > However, the issue mentioned above was also evident with it.
> > > 
> > > Hence, I think that before deciding whether I want to use
> > > __isolate_free_page(), which isolates pages from the buddy, or
> > > start/undo_isolate_page_range(), which just marks the pages as
> > > MIGRATE_ISOLATE, it is important for me to resolve the above-mentioned
> > > issue.
> > I'd be curious how you avoid causing memory starvation if you are
> > isolating ranges of memory that have recently been freed.
> 
> I would still be marking only 32 pages as MIGRATE_ISOLATE at a time. It
> is exactly the same as isolating a limited chunk of pages from the buddy.
> For example, if I have a pfn x of order y, then I pass
> start_isolate_page_range(x, x + (1 << y), mt, 0). So at the end we will
> have 32 such entries marked as MIGRATE_ISOLATE.

I get that you are isolating the same amount of memory. What I was getting
at is that __isolate_free_page() has a check in it to make certain you are
not pulling memory that would put you below the minimum watermark. As far
as I know there isn't anything like that in the page isolation framework,
since it is normally used for offlining memory before it is hot-removed.
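
From memory, the check in __isolate_free_page() looks roughly like the
following (paraphrased from mm/page_alloc.c; the exact form varies between
kernel versions):

	/* Paraphrased sketch of the watermark check in __isolate_free_page():
	 * removing 2^order pages must not drop the zone below the minimum
	 * watermark, otherwise the isolation is refused. */
	unsigned long watermark = min_wmark_pages(zone) + (1UL << order);

	if (!zone_watermark_ok(zone, 0, watermark, 0, 0))
		return 0;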

> > > Previous discussions:
> > > More about how we ended up with these two approaches can be found at
> > > [3] and [4], as explained by Alexander and David.
> > > 
> > > [1] https://lore.kernel.org/lkml/20190812131235.27244-1-nitesh@redhat.com/
> > > [2] https://lkml.org/lkml/2019/10/2/425
> > > [3] https://lkml.org/lkml/2019/10/23/1166
> > > [4] https://lkml.org/lkml/2019/9/12/48
> > > 
> > So one thing you may want to consider would be how placement of the
> > buffers will impact your performance.
> > 
> > One thing I realized I was doing wrong with my approach was scanning
> > for pages starting at the tail and then working up. It greatly hurt
> > the efficiency of my search since in the standard case most of the
> > free memory will be placed at the head and only with shuffling enabled
> > do I really need to worry about things getting mixed up with the tail.
> > 
> > I suspect you may be similarly making things more difficult for
> > yourself by placing the reported pages back on the head of the list
> > instead of placing them at the tail where they will not be reallocated
> > immediately.
> 
> hmm, I see. I will try and explore this.
> 
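
For what it's worth, the placement difference described above comes down
to list_add() versus list_add_tail() when a reported page goes back on
the free list. A rough sketch (the helper name is hypothetical, not code
from either series):

	#include <linux/list.h>
	#include <linux/mmzone.h>

	/* Hypothetical sketch: return a reported page to the buddy free
	 * list at the tail, so it is the last candidate for reallocation
	 * and unreported pages at the head are consumed first.
	 * Caller is assumed to hold zone->lock. */
	static void putback_reported_page(struct zone *zone, struct page *page,
					  unsigned int order, int migratetype)
	{
		struct free_area *area = &zone->free_area[order];

		list_add_tail(&page->lru, &area->free_list[migratetype]);
		area->nr_free++;
	}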



