KVM Archive on lore.kernel.org
From: Nitesh Narayan Lal <nitesh@redhat.com>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: kvm list <kvm@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	lcapitulino@redhat.com, pagupta@redhat.com, wei.w.wang@intel.com,
	Yang Zhang <yang.zhang.wz@gmail.com>,
	Rik van Riel <riel@surriel.com>,
	David Hildenbrand <david@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	dodgen@google.com, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	dhildenb@redhat.com, Andrea Arcangeli <aarcange@redhat.com>,
	john.starks@microsoft.com, Dave Hansen <dave.hansen@intel.com>,
	Michal Hocko <mhocko@suse.com>
Subject: Re: [RFC][Patch v11 1/2] mm: page_hinting: core infrastructure
Date: Fri, 12 Jul 2019 12:25:17 -0400
Message-ID: <89c65447-f876-9aac-957f-85f30ca43b2f@redhat.com> (raw)
In-Reply-To: <CAKgT0Uf7xsdh9OgBq-kyTkyvh8Qo9kV4uiWTVP7NKqzO4X0wyg@mail.gmail.com>

On 7/12/19 12:22 PM, Alexander Duyck wrote:
> On Thu, Jul 11, 2019 at 6:13 PM Nitesh Narayan Lal <nitesh@redhat.com> wrote:
>> On 7/11/19 7:20 PM, Alexander Duyck wrote:
>>> On Thu, Jul 11, 2019 at 10:58 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote:
>>>> On 7/10/19 5:56 PM, Alexander Duyck wrote:
>>>>> On Wed, Jul 10, 2019 at 12:52 PM Nitesh Narayan Lal <nitesh@redhat.com> wrote:
>>>>>> This patch introduces the core infrastructure for free page hinting in
>>>>>> virtual environments. It enables the kernel to track free pages which
>>>>>> can be reported to its hypervisor, so that the hypervisor can free and
>>>>>> reuse that memory as required.
>>>>>> While the pages are being processed by the hypervisor (e.g., via
>>>>>> MADV_FREE), the guest must not use them, otherwise data loss is
>>>>>> possible. To avoid such a situation, these pages are temporarily
>>>>>> removed from the buddy. The number of pages temporarily removed from
>>>>>> the buddy is governed by the backend (virtio-balloon in our case).
>>>>>> To efficiently identify free pages that can be hinted to the
>>>>>> hypervisor, coarse-grained bitmaps are used. Only fairly big chunks
>>>>>> ("MAX_ORDER - 2" on x86) are reported to the hypervisor, both to avoid
>>>>>> breaking up THP in the hypervisor and to save space. The bits in the
>>>>>> bitmap indicate that a page *might* be free, not a guarantee. A new
>>>>>> hook after buddy merging sets the bits.
>>>>>> Bitmaps are stored per zone and protected by the zone lock. A workqueue
>>>>>> asynchronously processes the bitmaps, trying to isolate and report pages
>>>>>> that are still free. The backend (virtio-balloon) is responsible for
>>>>>> reporting these batched pages to the host synchronously. Once reporting/
>>>>>> freeing is complete, isolated pages are returned to the buddy.
>>>>>> There are still various things to look into (e.g., memory hotplug, more
>>>>>> efficient locking, possible races when disabling).
>>>>>> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
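
[A rough, simplified sketch of the flow the commit message describes: a page
freed at a high enough order sets a bit in a per-zone bitmap, and a workqueue
later isolates and reports the still-free chunks. The zone fields used below
(zone->base_pfn, zone->bitmap) are illustrative stand-ins, not the actual
patch code.]

/* Illustrative only; field names are assumptions, not the real patch. */
static void bm_set_pfn(struct page *page)
{
	struct zone *zone = page_zone(page);
	unsigned long bitnr;

	/* One bit covers one PAGE_HINTING_MIN_ORDER-sized chunk. */
	bitnr = (page_to_pfn(page) - zone->base_pfn) >> PAGE_HINTING_MIN_ORDER;

	/* Called from the buddy free path with zone->lock already held. */
	set_bit(bitnr, zone->bitmap);
}
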
>>> So just FYI, I thought I would try the patches. It looks like there
>>> might be a bug somewhere that is causing it to free memory it
>>> shouldn't be freeing. After about 10 minutes my VM crashed with a
>>> system log full of various NULL pointer dereferences.
>> That's interesting; I have tried the patches with MADV_DONTNEED as well.
>> I just retried it but didn't see any crash. May I know what kind of
>> workload you are running?
> I was running the page_fault1 test on a VM with 80G of memory.
>>> The only change I made was to use MADV_DONTNEED instead of MADV_FREE
>>> in QEMU, since my headers didn't have MADV_FREE on the host. It occurs
>>> to me that one advantage of MADV_DONTNEED over MADV_FREE is that you
>>> are more likely to catch these sorts of errors, since it zeroes the
>>> pages instead of leaving them intact.
>> For development purposes, maybe. For the final patch-set, I think we
>> discussed earlier why we should keep MADV_FREE.
> I'm still not convinced MADV_FREE is a net win, at least for
> performance. You are still paying the cost of the VMEXIT in order to
> regain ownership of the page. When you are under memory pressure it is
> essentially equivalent to MADV_DONTNEED. Also, it doesn't really do
> much to help with the memory footprint of the VM itself. With
> MADV_DONTNEED the pages are freed back and you have a greater
> likelihood of reducing the overall memory footprint of the entire
> system, since you would be more likely to be assigned pages that were
> recently used rather than having to access a cold page.
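
[For context, the host-side difference comes down to which advice value QEMU
passes to madvise(). A minimal, illustrative helper, not QEMU code; the name
hint_region is made up:]

#include <stdbool.h>
#include <sys/mman.h>

/*
 * Return a guest-free region to the host kernel.  MADV_DONTNEED drops
 * the pages immediately (later guest reads see zero-filled pages),
 * while MADV_FREE only marks them reclaimable under memory pressure,
 * so stale data may survive.  That is why MADV_DONTNEED tends to
 * surface use-after-hint bugs more reliably.
 */
static int hint_region(void *hva, size_t len, bool lazy)
{
	return madvise(hva, len, lazy ? MADV_FREE : MADV_DONTNEED);
}
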
> <snip>
>>>>>> +void page_hinting_enqueue(struct page *page, int order)
>>>>>> +{
>>>>>> +       int zone_idx;
>>>>>> +
>>>>>> +       if (!page_hitning_conf || order < PAGE_HINTING_MIN_ORDER)
>>>>>> +               return;
>>>>> I would think it is going to be expensive to be jumping into this
>>>>> function for every freed page. You should probably have an inline
>>>>> that takes care of the order check before you even get here, since
>>>>> that would be faster.
>>>> I see, I can take a look. Thanks.
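
[Something along the lines being suggested; an untested sketch, and the split
into an inline wrapper plus a hypothetical out-of-line __page_hinting_enqueue()
is an assumption, not part of the series:]

/* In a header: the common "order too small" case costs only a compare. */
static inline void page_hinting_enqueue(struct page *page, int order)
{
	if (order < PAGE_HINTING_MIN_ORDER)
		return;
	__page_hinting_enqueue(page);
}

/* Out of line: only reached for chunks big enough to hint. */
void __page_hinting_enqueue(struct page *page)
{
	if (!page_hitning_conf)
		return;

	bm_set_pfn(page);
	if (atomic_read(&page_hinting_active))
		return;
	/* ...kick off the scan, as in the original function... */
}
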
>>>>>> +
>>>>>> +       bm_set_pfn(page);
>>>>>> +       if (atomic_read(&page_hinting_active))
>>>>>> +               return;
>>>>> So I would think this piece is racy. Specifically, if you set a PFN
>>>>> that is somewhere below the PFN you are currently processing in your
>>>>> scan, it is going to remain unhinted until another page is freed
>>>>> after the scan has completed. I would worry that you can end up with
>>>>> a batch free of memory resulting in a group of pages sitting at the
>>>>> start of your bitmap, unhinted.
>>>> True, but that will be hinted the next time the threshold is met.
>>> Yes, but that assumes another free is coming soon. It is possible for
>>> a big application to run, then immediately shut down and free all its
>>> memory at once. The worst-case scenario would be that it starts by
>>> freeing from the end and works toward the start. With that you could
>>> theoretically end up with a significant chunk of memory waiting some
>>> time for another big free to come along.
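
[A rough illustration of the ordering issue: the scanner walks bits in
ascending PFN order, so a bit set behind its current position is only picked
up on the next pass, i.e. once enough later frees re-arm the work. Field and
helper names here are made up:]

static void scan_zone_bitmap(struct zone *zone)
{
	unsigned long bit;

	/* Bits are visited from low PFN to high PFN; a bit set below the
	 * scanner's position after it has passed waits for the next scan.
	 */
	for_each_set_bit(bit, zone->bitmap, zone->nbits) {
		clear_bit(bit, zone->bitmap);
		isolate_and_report(zone, bit);	/* hypothetical helper */
	}
}
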
>> Any suggestions for a benchmark/test application which I could run to
>> see this kind of behavior?
> Like I mentioned before, try a VM with a bigger memory footprint. You
> could probably just do a stack of VMs like we were doing with the
> memhog test. Basically, the longer it takes to process all the pages,
> the greater the likelihood that there are still pages left when they
> are freed.
Thanks. Before the next posting I will make sure to test with a larger VM.

Thread overview: 38+ messages
2019-07-10 19:51 [RFC][PATCH v11 0/2] mm: Support for page hinting Nitesh Narayan Lal
2019-07-10 19:51 ` [RFC][Patch v11 1/2] mm: page_hinting: core infrastructure Nitesh Narayan Lal
2019-07-10 20:45   ` Dave Hansen
2019-07-11 11:48     ` Nitesh Narayan Lal
2019-07-11 15:25     ` Nitesh Narayan Lal
2019-07-11 15:50       ` Nitesh Narayan Lal
2019-07-11 16:22       ` Dave Hansen
2019-07-11 16:36         ` Nitesh Narayan Lal
2019-07-11 16:45           ` Dave Hansen
2019-07-11 16:52             ` Nitesh Narayan Lal
2019-07-15  9:26     ` David Hildenbrand
2019-07-10 21:56   ` Alexander Duyck
2019-07-11 17:58     ` Nitesh Narayan Lal
2019-07-11 23:20       ` Alexander Duyck
2019-07-12  1:12         ` Nitesh Narayan Lal
2019-07-12 16:22           ` Alexander Duyck
2019-07-12 16:25             ` Nitesh Narayan Lal [this message]
2019-07-11 18:21   ` Dave Hansen
2019-07-15  9:33     ` David Hildenbrand
2019-07-15 14:40       ` David Hildenbrand
2019-07-10 19:51 ` [RFC][Patch v11 2/2] virtio-balloon: page_hinting: reporting to the host Nitesh Narayan Lal
2019-07-10 19:53 ` [QEMU Patch] virtio-baloon: Support for page hinting Nitesh Narayan Lal
2019-07-10 20:17   ` Alexander Duyck
2019-07-11 12:03     ` Nitesh Narayan Lal
2019-07-11  8:49   ` Cornelia Huck
2019-07-11 11:13     ` Nitesh Narayan Lal
2019-07-11 18:55   ` Michael S. Tsirkin
2019-07-11 19:06     ` Nitesh Narayan Lal
2019-07-11 22:36       ` Alexander Duyck
2019-07-10 20:19 ` [RFC][PATCH v11 0/2] mm: " Dave Hansen
2019-07-11 11:37   ` Nitesh Narayan Lal
2019-07-10 23:40 ` Alexander Duyck
2019-07-11 11:30   ` Nitesh Narayan Lal
2019-07-11 14:58     ` Alexander Duyck
2019-07-11 15:03       ` Nitesh Narayan Lal
2019-07-11 15:08         ` Alexander Duyck
2019-07-11 15:19           ` Nitesh Narayan Lal
2019-07-11 17:01             ` Alexander Duyck
