From: Michael Kelley <mhklinux@outlook.com>
To: Michael Kelley <mhklinux@outlook.com>,
	"haiyangz@microsoft.com" <haiyangz@microsoft.com>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>,
	"decui@microsoft.com" <decui@microsoft.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>
Subject: RE: [PATCH 1/1] hv_balloon: Enable hot-add for memblock sizes > 128 Mbytes
Date: Thu, 25 Apr 2024 14:31:52 +0000
Message-ID: <SN6PR02MB4157F73AB998A1C247177609D4172@SN6PR02MB4157.namprd02.prod.outlook.com>
In-Reply-To: <20240311181238.1241-1-mhklinux@outlook.com>

From: mhkelley58@gmail.com <mhkelley58@gmail.com> Sent: Monday, March 11, 2024 11:13 AM
> 
> The Hyper-V balloon driver supports hot-add of memory in addition
> to ballooning. The current code hot-adds in fixed-size chunks of
> 128 Mbytes (the HA_CHUNK constant in the code).  While this works
> in Hyper-V VMs with 64 Gbytes or less of memory, where the Linux
> memblock size is 128 Mbytes, the hot-add fails for larger memblock
> sizes because add_memory() expects memory to be added in chunks
> that match the memblock size. Messages like the following are
> reported when Linux has a 256 Mbyte memblock size:
> 
> [  312.668859] Block size [0x10000000] unaligned hotplug range:
>                start 0x310000000, size 0x8000000
> [  312.668880] hv_balloon: hot_add memory failed error is -22
> [  312.668984] hv_balloon: Memory hot add failed
> 
> Larger memblock sizes are usually used in VMs with more than
> 64 Gbytes of memory, depending on the alignment of the VM's
> physical address space.
> 
> Fix this problem by having the Hyper-V balloon driver determine
> the Linux memblock size, and process hot-add requests in that
> chunk size instead of a fixed 128 Mbytes. Also update the hot-add
> alignment requested of the Hyper-V host to match the memblock
> size instead of being a fixed 128 Mbytes.
> 
> The code changes look significant, but in fact are just a
> simple text substitution of a new global variable for the
> previous HA_CHUNK constant. No algorithms are changed except
> to initialize the new global variable and to calculate the
> alignment value to pass to Hyper-V. Testing with memblock
> sizes of 256 Mbytes and 2 Gbytes shows correct operation.
> 
> Signed-off-by: Michael Kelley <mhklinux@outlook.com>

Anyone interested in reviewing this patch?  It's not an urgent
issue, but as memory sizes get larger, someone might trip
over the problem.
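
In case anyone wants to check whether a particular VM is exposed,
reading /sys/devices/system/memory/block_size_bytes shows the
memblock size Linux selected; for the failing case quoted above it
would show 0x10000000 (256 Mbytes), the same value reported in the
"Block size [0x10000000]" message.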

FWIW, these changes conflict with some of Aditya Nagesh's
whitespace cleanup in patch [1].  I'll respin this one to
sync with those changes if Aditya's patch goes in first.
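
In case it helps review, here is the arithmetic behind the new
hot_add_alignment value, spelled out for a 256 Mbyte memblock and
4 Kbyte pages (an illustration only, not additional code in the
patch):

    memory_block_size_bytes()          = 256 Mbytes
    ha_chunk_pgs = 256 MB / PAGE_SIZE  = 65536 pages
    ha_chunk_pgs >> (20 - PAGE_SHIFT)  = 65536 >> 8 = 256 (Mbytes per chunk)
    hot_add_alignment = ilog2(256)     = 8

With the old fixed 128 Mbyte chunk the same expression gives
ilog2(128) = 7, matching the hard-coded value being removed.  The
request chunking changes the same way: region_start and region_size
in hot_add_req() are rounded to multiples of ha_chunk_pgs rather
than 128 Mbytes, and hv_mem_hot_add() calls add_memory() once per
memblock-sized chunk.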

Michael

[1] https://lore.kernel.org/linux-hyperv/1713842326-25576-1-git-send-email-adityanagesh@linux.microsoft.com/

> ---
>  drivers/hv/hv_balloon.c | 64 ++++++++++++++++++++++++-----------------
>  1 file changed, 37 insertions(+), 27 deletions(-)
> 
> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
> index e000fa3b9f97..d3bfbf3d274a 100644
> --- a/drivers/hv/hv_balloon.c
> +++ b/drivers/hv/hv_balloon.c
> @@ -425,11 +425,11 @@ struct dm_info_msg {
>   * The range start_pfn : end_pfn specifies the range
>   * that the host has asked us to hot add. The range
>   * start_pfn : ha_end_pfn specifies the range that we have
> - * currently hot added. We hot add in multiples of 128M
> - * chunks; it is possible that we may not be able to bring
> - * online all the pages in the region. The range
> + * currently hot added. We hot add in chunks equal to the
> + * memory block size; it is possible that we may not be able
> + * to bring online all the pages in the region. The range
>   * covered_start_pfn:covered_end_pfn defines the pages that can
> - * be brough online.
> + * be brought online.
>   */
> 
>  struct hv_hotadd_state {
> @@ -505,8 +505,9 @@ enum hv_dm_state {
> 
>  static __u8 recv_buffer[HV_HYP_PAGE_SIZE];
>  static __u8 balloon_up_send_buffer[HV_HYP_PAGE_SIZE];
> +static unsigned long ha_chunk_pgs;
> +
>  #define PAGES_IN_2M (2 * 1024 * 1024 / PAGE_SIZE)
> -#define HA_CHUNK (128 * 1024 * 1024 / PAGE_SIZE)
> 
>  struct hv_dynmem_device {
>  	struct hv_device *dev;
> @@ -724,15 +725,15 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
>  	unsigned long processed_pfn;
>  	unsigned long total_pfn = pfn_count;
> 
> -	for (i = 0; i < (size/HA_CHUNK); i++) {
> -		start_pfn = start + (i * HA_CHUNK);
> +	for (i = 0; i < (size/ha_chunk_pgs); i++) {
> +		start_pfn = start + (i * ha_chunk_pgs);
> 
>  		scoped_guard(spinlock_irqsave, &dm_device.ha_lock) {
> -			has->ha_end_pfn +=  HA_CHUNK;
> +			has->ha_end_pfn +=  ha_chunk_pgs;
> 
> -			if (total_pfn > HA_CHUNK) {
> -				processed_pfn = HA_CHUNK;
> -				total_pfn -= HA_CHUNK;
> +			if (total_pfn > ha_chunk_pgs) {
> +				processed_pfn = ha_chunk_pgs;
> +				total_pfn -= ha_chunk_pgs;
>  			} else {
>  				processed_pfn = total_pfn;
>  				total_pfn = 0;
> @@ -745,7 +746,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
> 
>  		nid = memory_add_physaddr_to_nid(PFN_PHYS(start_pfn));
>  		ret = add_memory(nid, PFN_PHYS((start_pfn)),
> -				(HA_CHUNK << PAGE_SHIFT), MHP_MERGE_RESOURCE);
> +				(ha_chunk_pgs << PAGE_SHIFT), MHP_MERGE_RESOURCE);
> 
>  		if (ret) {
>  			pr_err("hot_add memory failed error is %d\n", ret);
> @@ -760,7 +761,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
>  				do_hot_add = false;
>  			}
>  			scoped_guard(spinlock_irqsave, &dm_device.ha_lock) {
> -				has->ha_end_pfn -= HA_CHUNK;
> +				has->ha_end_pfn -= ha_chunk_pgs;
>  				has->covered_end_pfn -=  processed_pfn;
>  			}
>  			break;
> @@ -838,11 +839,11 @@ static int pfn_covered(unsigned long start_pfn, unsigned long pfn_cnt)
>  		if ((start_pfn + pfn_cnt) > has->end_pfn) {
>  			residual = (start_pfn + pfn_cnt - has->end_pfn);
>  			/*
> -			 * Extend the region by multiples of HA_CHUNK.
> +			 * Extend the region by multiples of ha_chunk_pgs.
>  			 */
> -			new_inc = (residual / HA_CHUNK) * HA_CHUNK;
> -			if (residual % HA_CHUNK)
> -				new_inc += HA_CHUNK;
> +			new_inc = (residual / ha_chunk_pgs) * ha_chunk_pgs;
> +			if (residual % ha_chunk_pgs)
> +				new_inc += ha_chunk_pgs;
> 
>  			has->end_pfn += new_inc;
>  		}
> @@ -910,14 +911,14 @@ static unsigned long handle_pg_range(unsigned long pg_start,
>  			 * We have some residual hot add range
>  			 * that needs to be hot added; hot add
>  			 * it now. Hot add a multiple of
> -			 * HA_CHUNK that fully covers the pages
> +			 * ha_chunk_pgs that fully covers the pages
>  			 * we have.
>  			 */
>  			size = (has->end_pfn - has->ha_end_pfn);
>  			if (pfn_cnt <= size) {
> -				size = ((pfn_cnt / HA_CHUNK) * HA_CHUNK);
> -				if (pfn_cnt % HA_CHUNK)
> -					size += HA_CHUNK;
> +				size = ((pfn_cnt / ha_chunk_pgs) * ha_chunk_pgs);
> +				if (pfn_cnt % ha_chunk_pgs)
> +					size += ha_chunk_pgs;
>  			} else {
>  				pfn_cnt = size;
>  			}
> @@ -1021,11 +1022,11 @@ static void hot_add_req(struct work_struct *dummy)
>  		 * that need to be hot-added while ensuring the alignment
>  		 * and size requirements of Linux as it relates to hot-add.
>  		 */
> -		region_size = (pfn_cnt / HA_CHUNK) * HA_CHUNK;
> -		if (pfn_cnt % HA_CHUNK)
> -			region_size += HA_CHUNK;
> +		region_size = (pfn_cnt / ha_chunk_pgs) * ha_chunk_pgs;
> +		if (pfn_cnt % ha_chunk_pgs)
> +			region_size += ha_chunk_pgs;
> 
> -		region_start = (pg_start / HA_CHUNK) * HA_CHUNK;
> +		region_start = (pg_start / ha_chunk_pgs) * ha_chunk_pgs;
> 
>  		rg_start = region_start;
>  		rg_sz = region_size;
> @@ -1832,9 +1833,12 @@ static int balloon_connect_vsp(struct hv_device *dev)
> 
>  	/*
>  	 * Specify our alignment requirements as it relates
> -	 * memory hot-add. Specify 128MB alignment.
> +	 * memory hot-add.  The value is the log base 2 of
> +	 * the number of megabytes in a chunk. For example,
> +	 * with 256 Mbyte chunks, the value is 8.
>  	 */
> -	cap_msg.caps.cap_bits.hot_add_alignment = 7;
> +	cap_msg.caps.cap_bits.hot_add_alignment =
> +			ilog2(ha_chunk_pgs >> (20 - PAGE_SHIFT));
> 
>  	/*
>  	 * Currently the host does not use these
> @@ -2156,6 +2160,12 @@ static  struct hv_driver balloon_drv = {
> 
>  static int __init init_balloon_drv(void)
>  {
> +	/*
> +	 * Hot-add must operate in chunks that are of size
> +	 * equal to the memory block size because that's
> +	 * what the core add_memory() interface requires.
> +	 */
> +	ha_chunk_pgs = memory_block_size_bytes() / PAGE_SIZE;
> 
>  	return vmbus_driver_register(&balloon_drv);
>  }
> --
> 2.25.1
> 

