From: Nan Wang <wangnan.light@bytedance.com>
To: David Hildenbrand <david@redhat.com>, imammedo@redhat.com
Cc: mikughoull@gmail.com, ehabkost@redhat.com, qemu-devel@nongnu.org
Subject: Re: [External] Re: [PATCH] hostmem: change default prealloc threads number
Date: Tue, 28 Sep 2021 00:47:01 +0800	[thread overview]
Message-ID: <e1a98460-ad0a-9b9c-5958-bb39635886ec@bytedance.com> (raw)
In-Reply-To: <b14aebb1-489b-b15b-f9eb-047073920175@redhat.com>



On 2021/9/27 11:16, David Hildenbrand wrote:
> On 27.09.21 15:19, Nan Wang wrote:
>> From: "wangnan.light" <wangnan.light@bytedance.com>
>>
>> The default number of prealloc threads is 1; for a huge memory backend
>> file, touching pages with a single thread is really slow.
>> We can adjust the thread count via the prealloc-threads property, but
>> it may be better to update the default to MachineState::smp::cpus.
>> For example, in old versions of qemu (before prealloc-threads was
>> introduced), the thread count was MachineState::smp::cpus, so starting
>> the current version of qemu with the same command line as an old
>> version leads to different behavior.
> 
> The patch that introduced it mentions:
> 
> commit ffac16fab33bb42f17e47624985220c1fd864e9d
> Author: Igor Mammedov <imammedo@redhat.com>
> Date:   Wed Feb 19 11:09:50 2020 -0500
> 
>      hostmem: introduce "prealloc-threads" property
> 
>      the property will allow user to specify number of threads to use
>      in pre-allocation stage. It also will allow to reduce implicit
>      hostmem dependency on current_machine.
>      On object creation it will default to 1, but via machine
>      compat property it will be updated to MachineState::smp::cpus
>      to keep current behavior for hostmem and main RAM (which is
>      now also hostmem based).
> 
> So it looks like we want to do the latter via compat properties eventually.
> 
> However, I'd like to note that more prealloc threads might be good for 
> large backends, and might be bad for small backends. To me, it feels 
> like a workload that relies on this should really do this manually. So I 
> am still not sure if this is the right thing to do.
Yes, I agree that more prealloc threads are good for large backends 
and bad for small backends. But in most situations, large backends go 
with large vcpu counts and small backends with small vcpu counts, 
because few users create a VM with many vcpus but little memory.
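
As for doing it via compat properties as you mention: as far as I 
understand, a compat property can only pin a fixed string value per 
machine version, something like this made-up illustration (not an 
actual entry in hw/core/machine.c):

    GlobalProperty hw_compat_6_1[] = {
        /* hypothetical entry: pin the old default for older machines */
        { "memory-backend", "prealloc-threads", "1" },
    };

so a default that follows -smp still needs runtime code like this 
patch; a compat entry cannot compute MachineState::smp::cpus by 
itself.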


> 
> Note that qapi/qom.json says:
> 
> "@prealloc-threads: number of CPU threads to use for prealloc
> (default: 1)", so that doc would be wrong now.
> 
> Why exactly can't a workload that cares simply set this manually? 
> Performance tuning smells like something to be done manually for a 
> specific workload.
>
Letting the workload set the prealloc threads manually is simple 
enough: set many prealloc threads for large backends, and set 1 for 
small backends. Yes, a workload can `manually` set prealloc threads 
to 1 rather than relying on the `default` value of 1. But when a 
workload wants to use the default value (or simply forgets to specify 
the prealloc-threads property), I think MachineState::smp::cpus makes 
a better default than 1.
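
For example (a sketch with made-up id, size and path, not from a real 
deployment), the manual tuning looks like:

    qemu-system-x86_64 ... \
      -object memory-backend-file,id=mem0,size=128G,mem-path=/dev/hugepages,prealloc=on,prealloc-threads=16 \
      -numa node,memdev=mem0

versus simply omitting prealloc-threads and getting the default, 
which is the case this patch is about.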


>>
>> Signed-off-by: wangnan.light <wangnan.light@bytedance.com>
>> ---
>>   backends/hostmem.c | 2 +-
>>   hw/core/machine.c  | 5 +++++
>>   2 files changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/backends/hostmem.c b/backends/hostmem.c
>> index 4c05862ed5..c4a249b7e6 100644
>> --- a/backends/hostmem.c
>> +++ b/backends/hostmem.c
>> @@ -273,7 +273,7 @@ static void host_memory_backend_init(Object *obj)
>>       backend->merge = machine_mem_merge(machine);
>>       backend->dump = machine_dump_guest_core(machine);
>>       backend->reserve = true;
>> -    backend->prealloc_threads = 1;
>> +    backend->prealloc_threads = machine_smp_cpus(machine);
>>   }
>>   static void host_memory_backend_post_init(Object *obj)
>> diff --git a/hw/core/machine.c b/hw/core/machine.c
>> index 067f42b528..95ba5b1477 100644
>> --- a/hw/core/machine.c
>> +++ b/hw/core/machine.c
>> @@ -1065,6 +1065,11 @@ bool machine_dump_guest_core(MachineState *machine)
>>       return machine->dump_guest_core;
>>   }
>> +unsigned int machine_smp_cpus(MachineState *machine)
>> +{
>> +    return machine->smp.cpus;
>> +}
>> +
>>   bool machine_mem_merge(MachineState *machine)
>>   {
>>       return machine->mem_merge;
>>
> 
> 
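
For reference, the value chosen here ends up as the thread count for 
the page-touching loop; in backends/hostmem.c the prealloc path is 
roughly the following call (simplified from memory, the exact 
arguments may differ):

    /* fault in all backend pages using prealloc_threads worker threads */
    os_mem_prealloc(memory_region_get_fd(&backend->mr), ptr, sz,
                    backend->prealloc_threads, &local_err);

so the default directly controls how many threads fault in the 
backend's pages.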


Thread overview: 6+ messages
     [not found] <20210927131951.1810823-1-user@n248-145-203>
2021-09-27 15:16 ` [PATCH] hostmem: change default prealloc threads number David Hildenbrand
2021-09-27 16:47   ` Nan Wang [this message]
2021-09-29  9:05     ` [External] " Igor Mammedov
2021-09-29  9:10       ` David Hildenbrand
2021-09-29  9:22       ` Daniel P. Berrangé
2021-09-29 14:46         ` Igor Mammedov
