From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 13 Jul 2017 03:33:13 +0300
From: "Michael S. Tsirkin"
To: Wei Wang
Cc: linux-kernel@vger.kernel.org, qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	linux-mm@kvack.org, david@redhat.com, cornelia.huck@de.ibm.com,
	akpm@linux-foundation.org, mgorman@techsingularity.net,
	aarcange@redhat.com, amit.shah@redhat.com, pbonzini@redhat.com,
	liliang.opensource@gmail.com, virtio-dev@lists.oasis-open.org,
	yang.zhang.wz@gmail.com, quan.xu@aliyun.com
Subject: Re: [PATCH v12 6/8] mm: support reporting free page blocks
Message-ID: <20170713032314-mutt-send-email-mst@kernel.org>
References: <1499863221-16206-1-git-send-email-wei.w.wang@intel.com>
	<1499863221-16206-7-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1499863221-16206-7-git-send-email-wei.w.wang@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jul 12, 2017 at 08:40:19PM +0800, Wei Wang wrote:
> This patch adds support for reporting blocks of pages on the free list
> specified by the caller.
>
> As pages can leave the free list during this call or immediately
> afterwards, they are not guaranteed to be free after the function
> returns. The only guarantee this makes is that the page was on the free
> list at some point in time after the function has been invoked.
>
> Therefore, it is not safe for caller to use any pages on the returned
> block or to discard data that is put there after the function returns.
> However, it is safe for caller to discard data that was in one of these
> pages before the function was invoked.
>
> Signed-off-by: Wei Wang
> Signed-off-by: Liang Li
> ---
>  include/linux/mm.h |  5 +++
>  mm/page_alloc.c    | 96 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 101 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 46b9ac5..76cb433 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1835,6 +1835,11 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
>  		unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
>
> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
> +extern int report_unused_page_block(struct zone *zone, unsigned int order,
> +				    unsigned int migratetype,
> +				    struct page **page);
> +#endif
>  /*
>   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
>   * into the buddy system. The freed pages will be poisoned with pattern
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 64b7d82..8b3c9dd 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4753,6 +4753,102 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>  	show_swap_cache_info();
>  }
>
> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
> +
> +/*
> + * Heuristically get a page block in the system that is unused.
> + * It is possible that pages from the page block are used immediately after
> + * report_unused_page_block() returns. It is the caller's responsibility
> + * to either detect or prevent the use of such pages.
> + *
> + * The free list to check: zone->free_area[order].free_list[migratetype].
> + *
> + * If the caller supplied page block (i.e. **page) is on the free list, offer
> + * the next page block on the list to the caller. Otherwise, offer the first
> + * page block on the list.
> + *
> + * Note: it is not safe for caller to use any pages on the returned
> + * block or to discard data that is put there after the function returns.
> + * However, it is safe for caller to discard data that was in one of these
> + * pages before the function was invoked.
> + *
> + * Return 0 when a page block is found on the caller specified free list.

Otherwise?

> + */

As an alternative, we could have an API that scans free pages and invokes a
callback under a lock. Granted, this might end up spending a lot of time
under a lock. Is this a big issue? Some benchmarking will tell.

It would then be up to the hypervisor to decide whether it wants to play
tricks with the dirty bit or just wants to drop pages while the VCPU is
stopped.

> +int report_unused_page_block(struct zone *zone, unsigned int order,
> +			     unsigned int migratetype, struct page **page)
> +{
> +	struct zone *this_zone;
> +	struct list_head *this_list;
> +	int ret = 0;
> +	unsigned long flags;
> +
> +	/* Sanity check */
> +	if (zone == NULL || page == NULL || order >= MAX_ORDER ||
> +	    migratetype >= MIGRATE_TYPES)
> +		return -EINVAL;

Why would callers do this?

> +
> +	/* Zone validity check */
> +	for_each_populated_zone(this_zone) {
> +		if (zone == this_zone)
> +			break;
> +	}

Why? This will take a long time if there are lots of zones.

> +
> +	/* Got a non-existent zone from the caller? */
> +	if (zone != this_zone)
> +		return -EINVAL;

When does this happen?

> +
> +	spin_lock_irqsave(&this_zone->lock, flags);
> +
> +	this_list = &zone->free_area[order].free_list[migratetype];
> +	if (list_empty(this_list)) {
> +		*page = NULL;
> +		ret = 1;

What does this mean?

> +		goto out;
> +	}
> +
> +	/* The caller is asking for the first free page block on the list */
> +	if ((*page) == NULL) {

if (!*page) is shorter and prettier.

> +		*page = list_first_entry(this_list, struct page, lru);
> +		ret = 0;
> +		goto out;
> +	}
> +
> +	/*
> +	 * The page block passed from the caller is not on this free list
> +	 * anymore (e.g. a 1MB free page block has been split). In this case,
> +	 * offer the first page block on the free list that the caller is
> +	 * asking for.

This just might keep giving you the same block over and over again. E.g.:

- get 1st block
- get 2nd block
- 2nd gets broken up
- get 1st block again

This way we might never make progress beyond the first two blocks.

> +	 */
> +	if (PageBuddy(*page) && order != page_order(*page)) {
> +		*page = list_first_entry(this_list, struct page, lru);
> +		ret = 0;
> +		goto out;
> +	}
> +
> +	/*
> +	 * The page block passed from the caller has been the last page block
> +	 * on the list.
> +	 */
> +	if ((*page)->lru.next == this_list) {
> +		*page = NULL;
> +		ret = 1;
> +		goto out;
> +	}
> +
> +	/*
> +	 * Finally, fall into the regular case: the page block passed from the
> +	 * caller is still on the free list. Offer the next one.
> +	 */
> +	*page = list_next_entry((*page), lru);
> +	ret = 0;
> +out:
> +	spin_unlock_irqrestore(&this_zone->lock, flags);
> +	return ret;
> +}
> +EXPORT_SYMBOL(report_unused_page_block);
> +
> +#endif
> +
>  static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
>  {
>  	zoneref->zone = zone;
> --
> 2.7.4
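
[Editorial illustration, not part of the patch or of the thread above.]

To make the cursor semantics being reviewed easier to follow, here is a
minimal sketch of how a balloon-style caller might drive
report_unused_page_block(). It assumes the interface exactly as posted in
the patch; scan_free_list() and send_block_to_host() are invented
placeholder names.

	#include <linux/mm.h>
	#include <linux/mmzone.h>

	/* Hypothetical consumer of a reported block (e.g. queue it to the host). */
	static void send_block_to_host(unsigned long pfn, unsigned int order);

	/*
	 * Illustrative caller loop only. A NULL cursor asks for the first
	 * block on the chosen free list; a non-zero return means the list is
	 * empty, the cursor was the last block, or the arguments were invalid.
	 */
	static void scan_free_list(struct zone *zone, unsigned int order,
				   unsigned int migratetype)
	{
		struct page *page = NULL;	/* cursor into the free list */

		while (!report_unused_page_block(zone, order, migratetype, &page))
			/*
			 * The block may be reallocated as soon as the call
			 * returns, so only data that was already stale before
			 * the call may be discarded.
			 */
			send_block_to_host(page_to_pfn(page), order);
	}

The loop terminates when the cursor reaches the tail of the list, which is
where the "same block over and over" concern raised in the review applies if
the cursor block is split while earlier blocks are still being processed.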
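The callback-under-lock alternative mentioned in the review could look
roughly like the sketch below. It is purely illustrative:
walk_free_page_blocks() and report_cb_t are invented names, and real code
would need to bound the time spent under zone->lock (the benchmarking
question raised above).

	#include <linux/mm.h>
	#include <linux/mmzone.h>
	#include <linux/spinlock.h>

	/* Invented callback type: called for each free block while zone->lock is held. */
	typedef void (*report_cb_t)(unsigned long pfn, unsigned int order, void *opaque);

	static void walk_free_page_blocks(struct zone *zone, report_cb_t cb, void *opaque)
	{
		unsigned int order, mt;
		struct page *page;
		unsigned long flags;

		spin_lock_irqsave(&zone->lock, flags);
		for (order = 0; order < MAX_ORDER; order++) {
			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
				list_for_each_entry(page,
						    &zone->free_area[order].free_list[mt],
						    lru)
					/*
					 * Pages may be reused the moment the
					 * lock is dropped; the hypervisor must
					 * tolerate that, e.g. by re-checking
					 * the dirty log or only dropping pages
					 * while the VCPU is stopped.
					 */
					cb(page_to_pfn(page), order, opaque);
			}
		}
		spin_unlock_irqrestore(&zone->lock, flags);
	}

Compared with the cursor interface in the patch, this trades longer lock
hold times for a walker that cannot lose its place when free blocks are
split or merged between calls.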