From: David Hildenbrand <david@redhat.com>
To: "Gonglei (Arei)" <arei.gonglei@huawei.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	xuyandong <xuyandong2@huawei.com>
Cc: "Huangweidong \(C\)" <weidong.huang@huawei.com>,
	Zhanghailiang <zhang.zhanghailiang@huawei.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"wangxin \(U\)" <wangxinxin.wang@huawei.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	lidonglin <lidonglin@huawei.com>
Subject: Re: An emulation failure occurs, if I hotplug vcpus immediately after the VM start
Date: Thu, 7 Jun 2018 13:43:49 +0200	[thread overview]
Message-ID: <08a271a3-3e28-24e6-d37d-fdcc6df964bc@redhat.com> (raw)
In-Reply-To: <33183CC9F5247A488A2544077AF19020DB012108@dggeml511-mbx.china.huawei.com>

On 07.06.2018 13:13, Gonglei (Arei) wrote:
> 
>> -----Original Message-----
>> From: David Hildenbrand [mailto:david@redhat.com]
>> Sent: Thursday, June 07, 2018 6:40 PM
>> Subject: Re: An emulation failure occurs, if I hotplug vcpus immediately after the
>> VM start
>>
>> On 06.06.2018 15:57, Paolo Bonzini wrote:
>>> On 06/06/2018 15:28, Gonglei (Arei) wrote:
>>>> gonglei********: mem.slot: 3, mem.guest_phys_addr=0xc0000,
>>>> mem.userspace_addr=0x7fc343ec0000, mem.flags=0, memory_size=0x0
>>>> gonglei********: mem.slot: 3, mem.guest_phys_addr=0xc0000,
>>>> mem.userspace_addr=0x7fc343ec0000, mem.flags=0, memory_size=0x9000
>>>>
>>>> When the memory region is cleared, KVM marks the slot invalid
>>>> (it is set to KVM_MEMSLOT_INVALID).
>>>>
>>>> If SeaBIOS accesses this memory and causes a page fault, the
>>>> lookup by gfn (via __gfn_to_pfn_memslot) finds the invalid slot,
>>>> returns an invalid value, and ultimately reports a failure.
>>>>
>>>> So, my questions are:
>>>>
>>>> 1) Why don't we hold kvm->slots_lock during page fault processing?
>>>
>>> Because it's protected by SRCU.  We don't need kvm->slots_lock on the
>>> read side.
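
To make that concrete, the fault path essentially follows this pattern
(a simplified sketch, not the literal KVM code; kvm->srcu,
gfn_to_memslot(), gfn_to_pfn_memslot() and KVM_MEMSLOT_INVALID are the
real kernel names, the function around them is invented):

static kvm_pfn_t fault_path_sketch(struct kvm *kvm, gfn_t gfn)
{
	struct kvm_memory_slot *slot;
	kvm_pfn_t pfn = KVM_PFN_ERR_FAULT;
	int idx;

	idx = srcu_read_lock(&kvm->srcu);	/* read-side critical section */
	slot = gfn_to_memslot(kvm, gfn);	/* sees the old or the new array */
	if (slot && !(slot->flags & KVM_MEMSLOT_INVALID))
		pfn = gfn_to_pfn_memslot(slot, gfn);
	srcu_read_unlock(&kvm->srcu, idx);

	return pfn;
}

Writers publish a new memslot array and then wait for readers via
synchronize_srcu(), so no slots_lock is needed on the read side.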
>>>
>>>> 2) How do we ensure that vcpus will not access the corresponding
>>>> region while a memory slot is being deleted?
>>>
>>> We don't.  It's generally a guest bug if they do, but the problem here
>>> is that QEMU is splitting a memory region in two parts and that is not
>>> atomic.
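
To illustrate with the log above: from KVM's point of view the split
arrives as two separate KVM_SET_USER_MEMORY_REGION ioctls, roughly like
this (a userspace sketch only; vm_fd and hva stand for the open VM
descriptor and the host mapping, values taken from the log):

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>

static void split_region_sketch(int vm_fd, void *hva)
{
	struct kvm_userspace_memory_region mem = {
		.slot            = 3,
		.guest_phys_addr = 0xc0000,
		.userspace_addr  = (__u64)(uintptr_t)hva, /* 0x7fc343ec0000 above */
		.flags           = 0,
		.memory_size     = 0,	/* step 1: delete the slot */
	};
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);

	/* window: a VCPU faulting on 0xc0000 now sees no valid slot */

	mem.memory_size = 0x9000;	/* step 2: re-register with the new size */
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);
}

Any fault that lands in the window between the two calls fails as
described above.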
>>
>> BTW, one ugly (but QEMU-only) fix would be to temporarily pause all
>> VCPUs, do the change and then unpause all VCPUs.
>>
> 
> The memory region update is triggered by a vcpu thread, not the
> main thread, though.

Yes, I already ran into this problem, because it involves calling
pause_all_vcpus() from a VCPU thread. I sent a patch for that already,
but we were able to solve the s390x problem differently.

https://patchwork.kernel.org/patch/10331305/

The major problem of pause_all_vcpus() is that it will temporarily drop
the iothread mutex, which can result in "funny" side effects :) Handling
parallel calls to pause_all_vcpus() is the smaller issue.

So right now, it can only be used from the main thread.
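
For reference, the idea would have been as simple as this sketch
(pause_all_vcpus()/resume_all_vcpus() are the real QEMU functions,
update_the_memslot() is a hypothetical placeholder for the delete +
re-add pair):

static void split_region_paused(void)
{
	pause_all_vcpus();	/* careful: temporarily drops the iothread mutex */
	update_the_memslot();	/* delete + re-add; no VCPU can fault meanwhile */
	resume_all_vcpus();
}

With all VCPUs stopped, the intermediate "slot deleted" state is
invisible to the guest.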

> 
> Thanks,
> -Gonglei
> 
>>>
>>> One fix could be to add a KVM_SET_USER_MEMORY_REGIONS ioctl that
>>> replaces the entire memory map atomically.
>>>
>>> Paolo
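
Just to sketch what such an interface could look like (purely
hypothetical; this ioctl does not exist, the name and layout here are
invented for illustration):

/* Hypothetical KVM_SET_USER_MEMORY_REGIONS payload: userspace hands
 * over the complete new memory map and KVM installs it in one go, so
 * a guest never observes an intermediate state with the slot missing. */
struct kvm_userspace_memory_regions {
	__u32 nregions;
	__u32 padding;
	struct kvm_userspace_memory_region regions[];
};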
>>>
>>
>>
>> --
>>
>> Thanks,
>>
>> David / dhildenb


-- 

Thanks,

David / dhildenb

Thread overview: 38+ messages

2018-06-01  8:17 An emulation failure occurs, if I hotplug vcpus immediately after the VM start xuyandong
2018-06-01 10:23 ` Igor Mammedov
2018-06-06 13:28   ` Gonglei (Arei)
2018-06-06 13:57     ` Paolo Bonzini
2018-06-06 14:18       ` xuyandong
2018-06-06 14:23         ` Paolo Bonzini
2018-06-07 10:37       ` David Hildenbrand
2018-06-07 11:02         ` Paolo Bonzini
2018-06-07 11:36           ` David Hildenbrand
2018-06-07 12:36             ` Paolo Bonzini
2018-06-07 12:55               ` David Hildenbrand
2018-06-07 16:03                 ` 浙大邮箱
2018-06-11 10:44                   ` David Hildenbrand
2018-06-11 12:25                     ` Gonglei (Arei)
2018-06-11 12:36                       ` David Hildenbrand
2018-06-11 13:25                         ` Gonglei (Arei)
2018-06-07 10:39       ` David Hildenbrand
2018-06-07 11:13         ` Gonglei (Arei)
2018-06-07 11:43           ` David Hildenbrand [this message]
