From: Gavin Shan <gshan@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: drjones@redhat.com, david@redhat.com,
	richard.henderson@linaro.org, qemu-devel@nongnu.org,
	eric.auger@redhat.com, qemu-arm@nongnu.org, shan.gavin@gmail.com,
	jonathan.cameron@huawei.com, imammedo@redhat.com
Subject: Re: [PATCH v3 2/2] hw/arm/virt: Support for virtio-mem-pci
Date: Sat, 8 Jan 2022 15:21:55 +0800	[thread overview]
Message-ID: <3528fa8b-bfa6-2127-dfe6-4135b3b0989f@redhat.com> (raw)
In-Reply-To: <CAFEAcA8hd000vwp8A602uw4yueea4uU0xttELcC8sn34X+N5-A@mail.gmail.com>

Hi Peter,

On 1/8/22 12:40 AM, Peter Maydell wrote:
> On Fri, 3 Dec 2021 at 23:34, Gavin Shan <gshan@redhat.com> wrote:
>>
>> This supports virtio-mem-pci device on "virt" platform, by simply
>> following the implementation on x86.
>>
>>     * This implements the hotplug handlers to support virtio-mem-pci
>>       device hot-add, while the hot-remove isn't supported as we have
>>       on x86.
>>
>>     * The block size is 512MB on ARM64 instead of 128MB on x86.
>>
>>     * It has been passing the tests with various combinations like 64KB
>>       and 4KB page sizes on host and guest, different memory device
>>       backends like normal, transparent huge page and HugeTLB, plus
>>       migration.
>>
>> Co-developed-by: David Hildenbrand <david@redhat.com>
>> Co-developed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
>> Reviewed-by: David Hildenbrand <david@redhat.com>
> 
> 
>> +static void virt_virtio_md_pci_pre_plug(HotplugHandler *hotplug_dev,
>> +                                        DeviceState *dev, Error **errp)
>> +{
>> +    HotplugHandler *hotplug_dev2 = qdev_get_bus_hotplug_handler(dev);
>> +    Error *local_err = NULL;
>> +
>> +    if (!hotplug_dev2 && dev->hotplugged) {
>> +        /*
>> +         * Without a bus hotplug handler, we cannot control the plug/unplug
>> +         * order. We should never reach this point when hotplugging on x86,
>> +         * however, better add a safety net.
>> +         */
> 
> This comment looks like it was cut-n-pasted from x86 -- is whatever
> it is that prevents us from reaching this point also true for arm ?
> (What is the thing that prevents us reaching this point?)
> 

Yeah, the comment was copied from x86. The same holds for ARM: a hotplug
handler on the parent bus is required for virtio-mem-pci device hot-add,
as the following commit log explains.

commit a0a49813f7f2fc23bfe8a4fc6760e2a60c9a3e59
Author: David Hildenbrand <david@redhat.com>
Date:   Wed Jun 19 15:19:07 2019 +0530

     pc: Support for virtio-pmem-pci
     
     Override the device hotplug handler to properly handle the memory device
     part via virtio-pmem-pci callbacks from the machine hotplug handler and
     forward to the actual PCI bus hotplug handler.
     
     As PCI hotplug has not been properly factored out into hotplug handlers,
     most magic is performed in the (un)realize functions. Also some PCI host
     buses don't have a PCI hotplug handler at all yet, just to be sure that
     we alway have a hotplug handler on x86, add a simple error check.
     
     Unlocking virtio-pmem will unlock virtio-pmem-pci.
     
     Signed-off-by: David Hildenbrand <david@redhat.com>

However, I don't think the comment is precise enough for ARM, since the
reference to x86 isn't relevant here. I will change it to something like
the following in v4:

	/*
	 * Without a bus hotplug handler, we cannot control the plug/unplug
	 * order. We should never reach this point when hotplugging on ARM.
	 * However, it's nice to add a safety net, similar to what we have
	 * on x86.
	 */


>> +        error_setg(errp, "hotplug of virtio based memory devices not supported"
>> +                   " on this bus.");
>> +        return;
>> +    }
>> +    /*
>> +     * First, see if we can plug this memory device at all. If that
>> +     * succeeds, branch of to the actual hotplug handler.
>> +     */
>> +    memory_device_pre_plug(MEMORY_DEVICE(dev), MACHINE(hotplug_dev), NULL,
>> +                           &local_err);
>> +    if (!local_err && hotplug_dev2) {
>> +        hotplug_handler_pre_plug(hotplug_dev2, dev, &local_err);
>> +    }
>> +    error_propagate(errp, local_err);
>> +}
> 
> 
> 
>> diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
>> index b20595a496..21e4d572ab 100644
>> --- a/hw/virtio/virtio-mem.c
>> +++ b/hw/virtio/virtio-mem.c
>> @@ -125,7 +125,7 @@ static uint64_t virtio_mem_default_block_size(RAMBlock *rb)
>>    * The memory block size corresponds mostly to the section size.
>>    *
>>    * This allows e.g., to add 20MB with a section size of 128MB on x86_64, and
>> - * a section size of 1GB on arm64 (as long as the start address is properly
>> + * a section size of 512MB on arm64 (as long as the start address is properly
>>    * aligned, similar to ordinary DIMMs).
>>    *
>>    * We can change this at any time and maybe even make it configurable if
>> @@ -134,6 +134,8 @@ static uint64_t virtio_mem_default_block_size(RAMBlock *rb)
>>    */
>>   #if defined(TARGET_X86_64) || defined(TARGET_I386)
>>   #define VIRTIO_MEM_USABLE_EXTENT (2 * (128 * MiB))
>> +#elif defined(TARGET_ARM)
>> +#define VIRTIO_MEM_USABLE_EXTENT (2 * (512 * MiB))
>>   #else
>>   #error VIRTIO_MEM_USABLE_EXTENT not defined
>>   #endif
> 
> Could this comment explain where the 128MB and 512MB come from
> and why the value is different for different architectures ?
> 

Yes, the comment already explains it via the "section size", which is the
minimal hotpluggable unit. It is defined by the Linux guest kernel as below.
On ARM64, we pick the larger section size without considering the base page
size. Besides, virtio-mem is (or will be) enabled only in the x86_64 and
ARM64 guest kernels.

#define SECTION_SIZE_BITS  29      /* ARM:    64KB base page size        */
#define SECTION_SIZE_BITS  27      /* ARM:    16KB or 4KB base page size */
#define SECTION_SIZE_BITS  27      /* x86_64                             */
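
Just to spell out the arithmetic, here is a small stand-alone sketch (not
QEMU code, names are illustrative) deriving the 128MB/512MB section sizes
from SECTION_SIZE_BITS, together with the doubled VIRTIO_MEM_USABLE_EXTENT
values from the hunk above:

#include <stdio.h>

#define MiB (1024ULL * 1024ULL)

int main(void)
{
    /* Section size = 1 << SECTION_SIZE_BITS, the minimal hotpluggable unit. */
    unsigned long long section_x86_64 = 1ULL << 27; /* 128 MiB */
    unsigned long long section_arm64  = 1ULL << 29; /* 512 MiB (64KB pages) */

    /* The quoted patch doubles the section size for VIRTIO_MEM_USABLE_EXTENT. */
    printf("x86_64: section %llu MiB, usable extent %llu MiB\n",
           section_x86_64 / MiB, (2 * section_x86_64) / MiB);
    printf("arm64:  section %llu MiB, usable extent %llu MiB\n",
           section_arm64 / MiB, (2 * section_arm64) / MiB);
    return 0;
}

Running it prints 128/256 MiB for x86_64 and 512/1024 MiB for arm64, which
matches the macros in the patch.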

Thanks,
Gavin


