From: Keqian Zhu <zhukeqian1@huawei.com>
To: Peter Xu <peterx@redhat.com>, <qemu-devel@nongnu.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Hyman <huangy81@chinatelecom.cn>,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [PATCH v5 02/10] KVM: Use a big lock to replace per-kml slots_lock
Date: Mon, 22 Mar 2021 18:47:58 +0800
Message-ID: <71e85a52-7385-e88a-f51f-9371bc371d06@huawei.com>
In-Reply-To: <20210310203301.194842-3-peterx@redhat.com>
Hi Peter,
On 2021/3/11 4:32, Peter Xu wrote:
> Per-kml slots_lock will bring some trouble if we want to take all slots_lock of
> all the KMLs, especially when we're in a context that we could have taken some
> of the KML slots_lock, then we even need to figure out what we've taken and
> what we need to take.
>
> Make this simple by merging all KML slots_lock into a single slots lock.
>
> Per-kml slots_lock isn't anything that helpful anyway - so far only x86 has two
> address spaces (so, two slots_locks). All the rest archs will be having one
> address space always, which means there's actually one slots_lock so it will be
> the same as before.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
> accel/kvm/kvm-all.c | 32 +++++++++++++++++---------------
> include/sysemu/kvm_int.h | 2 --
> 2 files changed, 17 insertions(+), 17 deletions(-)
>
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index f88a52393fe..94e881f123b 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -174,8 +174,10 @@ typedef struct KVMResampleFd KVMResampleFd;
> static QLIST_HEAD(, KVMResampleFd) kvm_resample_fd_list =
> QLIST_HEAD_INITIALIZER(kvm_resample_fd_list);
>
> -#define kvm_slots_lock(kml) qemu_mutex_lock(&(kml)->slots_lock)
> -#define kvm_slots_unlock(kml) qemu_mutex_unlock(&(kml)->slots_lock)
> +static QemuMutex kml_slots_lock;
> +
> +#define kvm_slots_lock() qemu_mutex_lock(&kml_slots_lock)
> +#define kvm_slots_unlock() qemu_mutex_unlock(&kml_slots_lock)
nit: the qemu_mutex_lock and qemu_mutex_unlock calls are not aligned.
>
> static inline void kvm_resample_fd_remove(int gsi)
> {
> @@ -241,9 +243,9 @@ bool kvm_has_free_slot(MachineState *ms)
> bool result;
> KVMMemoryListener *kml = &s->memory_listener;
>
> - kvm_slots_lock(kml);
> + kvm_slots_lock();
> result = !!kvm_get_free_slot(kml);
> - kvm_slots_unlock(kml);
> + kvm_slots_unlock();
>
> return result;
> }
> @@ -309,7 +311,7 @@ int kvm_physical_memory_addr_from_host(KVMState *s, void *ram,
> KVMMemoryListener *kml = &s->memory_listener;
> int i, ret = 0;
>
> - kvm_slots_lock(kml);
> + kvm_slots_lock();
> for (i = 0; i < s->nr_slots; i++) {
> KVMSlot *mem = &kml->slots[i];
>
> @@ -319,7 +321,7 @@ int kvm_physical_memory_addr_from_host(KVMState *s, void *ram,
> break;
> }
> }
> - kvm_slots_unlock(kml);
> + kvm_slots_unlock();
>
> return ret;
> }
> @@ -515,7 +517,7 @@ static int kvm_section_update_flags(KVMMemoryListener *kml,
> return 0;
> }
>
> - kvm_slots_lock(kml);
> + kvm_slots_lock();
>
> while (size && !ret) {
> slot_size = MIN(kvm_max_slot_size, size);
> @@ -531,7 +533,7 @@ static int kvm_section_update_flags(KVMMemoryListener *kml,
> }
>
> out:
> - kvm_slots_unlock(kml);
> + kvm_slots_unlock();
> return ret;
> }
>
> @@ -819,7 +821,7 @@ static int kvm_physical_log_clear(KVMMemoryListener *kml,
> return ret;
> }
>
> - kvm_slots_lock(kml);
> + kvm_slots_lock();
>
> for (i = 0; i < s->nr_slots; i++) {
> mem = &kml->slots[i];
> @@ -845,7 +847,7 @@ static int kvm_physical_log_clear(KVMMemoryListener *kml,
> }
> }
>
> - kvm_slots_unlock(kml);
> + kvm_slots_unlock();
>
> return ret;
> }
> @@ -1150,7 +1152,7 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
> ram = memory_region_get_ram_ptr(mr) + section->offset_within_region +
> (start_addr - section->offset_within_address_space);
>
> - kvm_slots_lock(kml);
> + kvm_slots_lock();
>
> if (!add) {
> do {
> @@ -1208,7 +1210,7 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
> } while (size);
>
> out:
> - kvm_slots_unlock(kml);
> + kvm_slots_unlock();
> }
>
> static void kvm_region_add(MemoryListener *listener,
> @@ -1235,9 +1237,9 @@ static void kvm_log_sync(MemoryListener *listener,
> KVMMemoryListener *kml = container_of(listener, KVMMemoryListener, listener);
> int r;
>
> - kvm_slots_lock(kml);
> + kvm_slots_lock();
> r = kvm_physical_sync_dirty_bitmap(kml, section);
> - kvm_slots_unlock(kml);
> + kvm_slots_unlock();
> if (r < 0) {
> abort();
> }
> @@ -1337,7 +1339,7 @@ void kvm_memory_listener_register(KVMState *s, KVMMemoryListener *kml,
> {
> int i;
>
> - qemu_mutex_init(&kml->slots_lock);
> + qemu_mutex_init(&kml_slots_lock);
As you said, x86 has two address spaces, so kvm_memory_listener_register() runs twice. Is it a problem that kml_slots_lock may be initialized more than once?
Thanks,
Keqian
> kml->slots = g_malloc0(s->nr_slots * sizeof(KVMSlot));
> kml->as_id = as_id;
>
> diff --git a/include/sysemu/kvm_int.h b/include/sysemu/kvm_int.h
> index ccb8869f01b..1da30e18841 100644
> --- a/include/sysemu/kvm_int.h
> +++ b/include/sysemu/kvm_int.h
> @@ -27,8 +27,6 @@ typedef struct KVMSlot
>
> typedef struct KVMMemoryListener {
> MemoryListener listener;
> - /* Protects the slots and all inside them */
> - QemuMutex slots_lock;
> KVMSlot *slots;
> int as_id;
> } KVMMemoryListener;
>