* [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test
@ 2021-05-28 19:11 Paolo Bonzini
  2021-05-28 19:51 ` Maciej S. Szmigiero
  0 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2021-05-28 19:11 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Maciej S . Szmigiero

The memory that is allocated in vm_create is already mapped close to
GPA 0, because test_execute passes the requested memory to
prepare_vm.  This causes overlapping memory regions and the
test crashes.  For simplicity just move MEM_GPA higher.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 tools/testing/selftests/kvm/memslot_perf_test.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index 11239652d805..6d28e920b1e3 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -29,7 +29,7 @@
 
 #define MEM_SIZE		((512U << 20) + 4096)
 #define MEM_SIZE_PAGES		(MEM_SIZE / 4096)
-#define MEM_GPA		0x10000000UL
+#define MEM_GPA			(MEM_SIZE + 0x10000000UL)
 #define MEM_AUX_GPA		MEM_GPA
 #define MEM_SYNC_GPA		MEM_AUX_GPA
 #define MEM_TEST_GPA		(MEM_AUX_GPA + 4096)
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test
  2021-05-28 19:11 [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test Paolo Bonzini
@ 2021-05-28 19:51 ` Maciej S. Szmigiero
  2021-05-29 10:20   ` Paolo Bonzini
  0 siblings, 1 reply; 12+ messages in thread
From: Maciej S. Szmigiero @ 2021-05-28 19:51 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm

On 28.05.2021 21:11, Paolo Bonzini wrote:
> The memory that is allocated in vm_create is already mapped close to
> GPA 0, because test_execute passes the requested memory to
> prepare_vm.  This causes overlapping memory regions and the
> test crashes.  For simplicity just move MEM_GPA higher.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

I am not sure that I understand the issue correctly: is vm_create_default()
already reserving low GPAs (around 0x10000000) on some arches or in some
runtime environments?

Thanks,
Maciej

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test
  2021-05-28 19:51 ` Maciej S. Szmigiero
@ 2021-05-29 10:20   ` Paolo Bonzini
  2021-05-29 23:13     ` Maciej S. Szmigiero
  0 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2021-05-29 10:20 UTC (permalink / raw)
  To: Maciej S. Szmigiero; +Cc: linux-kernel, kvm

On 28/05/21 21:51, Maciej S. Szmigiero wrote:
> On 28.05.2021 21:11, Paolo Bonzini wrote:
>> The memory that is allocated in vm_create is already mapped close to
>> GPA 0, because test_execute passes the requested memory to
>> prepare_vm.  This causes overlapping memory regions and the
>> test crashes.  For simplicity just move MEM_GPA higher.
>>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> 
> I am not sure that I understand the issue correctly, is vm_create_default()
> already reserving low GPAs (around 0x10000000) on some arches or run
> environments?

It maps the number of pages you pass in the second argument, see
vm_create.

   if (phy_pages != 0)
     vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
                                 0, 0, phy_pages, 0);

In this case:

   data->vm = vm_create_default(VCPU_ID, mempages, guest_code);

called here:

   if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
                   mem_size, slot_runtime)) {

where mempages is mem_size, which is declared as:

         uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;

but actually a better fix is just to pass a small fixed value (e.g. 
1024) to vm_create_default, since all other regions are added by hand.
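
For illustration, that alternative fix would look something like this in
prepare_vm (just a sketch, not a tested patch):

   /* slot 0 then only needs a small, fixed amount of extra memory,
    * since the test adds all of its test memory regions by hand */
   data->vm = vm_create_default(VCPU_ID, 1024, guest_code);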

Paolo


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test
  2021-05-29 10:20   ` Paolo Bonzini
@ 2021-05-29 23:13     ` Maciej S. Szmigiero
  2021-06-02 23:07       ` Maciej S. Szmigiero
  0 siblings, 1 reply; 12+ messages in thread
From: Maciej S. Szmigiero @ 2021-05-29 23:13 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm

On 29.05.2021 12:20, Paolo Bonzini wrote:
> On 28/05/21 21:51, Maciej S. Szmigiero wrote:
>> On 28.05.2021 21:11, Paolo Bonzini wrote:
>>> The memory that is allocated in vm_create is already mapped close to
>>> GPA 0, because test_execute passes the requested memory to
>>> prepare_vm.  This causes overlapping memory regions and the
>>> test crashes.  For simplicity just move MEM_GPA higher.
>>>
>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>
>> I am not sure that I understand the issue correctly, is vm_create_default()
>> already reserving low GPAs (around 0x10000000) on some arches or run
>> environments?
> 
> It maps the number of pages you pass in the second argument, see
> vm_create.
> 
>    if (phy_pages != 0)
>      vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
>                                  0, 0, phy_pages, 0);
> 
> In this case:
> 
>    data->vm = vm_create_default(VCPU_ID, mempages, guest_code);
> 
> called here:
> 
>    if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
>                    mem_size, slot_runtime)) {
> 
> where mempages is mem_size, which is declared as:
> 
>          uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
> 
> but actually a better fix is just to pass a small fixed value (e.g. 1024) to vm_create_default,
> since all other regions are added by hand

Yes, but the argument that is passed to vm_create_default() (mem_size
in the case of the test) is not passed as phy_pages to vm_create().
Rather, vm_create_with_vcpus() calculates some upper bound of extra
memory that is needed to cover that much guest memory (including for
its page tables).

The biggest possible mem_size from memslot_perf_test is 512 MiB + 1 page;
according to my calculations this results in phy_pages of 1029 (~4 MiB)
in the x86-64 case and around 1540 (~6 MiB) in the s390x case (here I am
not sure about the exact number, since s390x has some additional alignment
requirements).

Both values are well below 256 MiB (0x10000000UL), so I was wondering
what kind of circumstances can make these allocations collide
(maybe I am missing something in my analysis).
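
For reference, the slot 0 sizing logic in vm_create_with_vcpus() that the
calculation above is based on reads roughly like this:

   uint64_t vcpu_pages = (DEFAULT_STACK_PGS + num_percpu_pages) * nr_vcpus;
   uint64_t extra_pg_pages = (extra_mem_pages + vcpu_pages) / PTES_PER_MIN_PAGE * 2;
   uint64_t pages = DEFAULT_GUEST_PHY_PAGES + vcpu_pages + extra_pg_pages;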

> 
> Paolo
> 

Thanks,
Maciej

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test
  2021-05-29 23:13     ` Maciej S. Szmigiero
@ 2021-06-02 23:07       ` Maciej S. Szmigiero
  2021-06-03  5:26         ` Duan, Zhenzhong
  0 siblings, 1 reply; 12+ messages in thread
From: Maciej S. Szmigiero @ 2021-06-02 23:07 UTC (permalink / raw)
  To: Paolo Bonzini, Zhenzhong Duan; +Cc: linux-kernel, kvm, Andrew Jones

On 30.05.2021 01:13, Maciej S. Szmigiero wrote:
> On 29.05.2021 12:20, Paolo Bonzini wrote:
>> On 28/05/21 21:51, Maciej S. Szmigiero wrote:
>>> On 28.05.2021 21:11, Paolo Bonzini wrote:
>>>> The memory that is allocated in vm_create is already mapped close to
>>>> GPA 0, because test_execute passes the requested memory to
>>>> prepare_vm.  This causes overlapping memory regions and the
>>>> test crashes.  For simplicity just move MEM_GPA higher.
>>>>
>>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>>
>>> I am not sure that I understand the issue correctly, is vm_create_default()
>>> already reserving low GPAs (around 0x10000000) on some arches or run
>>> environments?
>>
>> It maps the number of pages you pass in the second argument, see
>> vm_create.
>>
>>    if (phy_pages != 0)
>>      vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
>>                                  0, 0, phy_pages, 0);
>>
>> In this case:
>>
>>    data->vm = vm_create_default(VCPU_ID, mempages, guest_code);
>>
>> called here:
>>
>>    if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
>>                    mem_size, slot_runtime)) {
>>
>> where mempages is mem_size, which is declared as:
>>
>>          uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
>>
>> but actually a better fix is just to pass a small fixed value (e.g. 1024) to vm_create_default,
>> since all other regions are added by hand
> 
> Yes, but the argument that is passed to vm_create_default() (mem_size
> in the case of the test) is not passed as phy_pages to vm_create().
> Rather, vm_create_with_vcpus() calculates some upper bound of extra
> memory that is needed to cover that much guest memory (including for
> its page tables).
> 
> The biggest possible mem_size from memslot_perf_test is 512 MiB + 1 page,
> according to my calculations this results in phy_pages of 1029 (~4 MiB)
> in the x86-64 case and around 1540 (~6 MiB) in the s390x case (here I am
> not sure about the exact number, since s390x has some additional alignment
> requirements).
> 
> Both values are well below 256 MiB (0x10000000UL), so I was wondering
> what kind of circumstances can make these allocations collide
> (maybe I am missing something in my analysis).

I see now that there has been a patch merged last week called
"selftests: kvm: make allocation of extra memory take effect" by
Zhenzhong that now also allocates the whole memory size passed to
vm_create_default() (instead of just page tables for that much memory).

The commit message of this patch says that "perf_test_util and
kvm_page_table_test use it to alloc extra memory currently", however both
kvm_page_table_test and lib/perf_test_util framework explicitly add the
required memory allocation by doing a vm_userspace_mem_region_add() call
for the same memory size that they pass to vm_create_default().

So now they allocate this memory twice.
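
Schematically, the resulting pattern in these tests is something like the
following (a simplified sketch, not the exact code; the names are just for
illustration):

   /* with the merged patch, this already adds test_pages to slot 0 ... */
   vm = vm_create_default(VCPU_ID, test_pages, guest_code);

   /* ... and then the test adds the same amount again in its own slot */
   vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
                               test_gpa, test_slot, test_pages, 0);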

@Zhenzhong: did you notice improper operation of either kvm_page_table_test
or perf_test_util-based tests (demand_paging_test, dirty_log_perf_test,
memslot_modification_stress_test) before your patch?

They seem to work fine for me without the patch (and I guess other
people would have noticed earlier, too, if they were broken).

After this patch, not only do these tests allocate their memory twice, but
it is also harder to make vm_create_default() allocate the right amount
of memory for the page tables in cases where the test needs to
explicitly use vm_userspace_mem_region_add() for its allocations
(because it wants the allocation placed at a specific GPA or in a
specific memslot).

One then basically has to open-code the page table size calculations
from vm_create_with_vcpus() in the particular test, also taking into
account that vm_create_with_vcpus() will not only allocate the passed
memory size (the calculated page table size) but also behave as if it
were allocating space for page tables for these page tables (even though
the passed memory size itself is supposed to cover them).

Due to the above, I suspect the previous behavior was, in fact, correct.

Thanks,
Maciej

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test
  2021-06-02 23:07       ` Maciej S. Szmigiero
@ 2021-06-03  5:26         ` Duan, Zhenzhong
  2021-06-03 12:37           ` Andrew Jones
  2021-06-03 13:05           ` [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test Maciej S. Szmigiero
  0 siblings, 2 replies; 12+ messages in thread
From: Duan, Zhenzhong @ 2021-06-03  5:26 UTC (permalink / raw)
  To: Maciej S. Szmigiero, Paolo Bonzini; +Cc: linux-kernel, kvm, Andrew Jones

> -----Original Message-----
> From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
> Sent: Thursday, June 3, 2021 7:07 AM
> To: Paolo Bonzini <pbonzini@redhat.com>; Duan, Zhenzhong
> <zhenzhong.duan@intel.com>
> Cc: linux-kernel@vger.kernel.org; kvm@vger.kernel.org; Andrew Jones
> <drjones@redhat.com>
> Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in
> memslot_perf_test
> 
> On 30.05.2021 01:13, Maciej S. Szmigiero wrote:
> > On 29.05.2021 12:20, Paolo Bonzini wrote:
> >> On 28/05/21 21:51, Maciej S. Szmigiero wrote:
> >>> On 28.05.2021 21:11, Paolo Bonzini wrote:
> >>>> The memory that is allocated in vm_create is already mapped close
> >>>> to GPA 0, because test_execute passes the requested memory to
> >>>> prepare_vm.  This causes overlapping memory regions and the test
> >>>> crashes.  For simplicity just move MEM_GPA higher.
> >>>>
> >>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> >>>
> >>> I am not sure that I understand the issue correctly, is
> >>> vm_create_default() already reserving low GPAs (around 0x10000000)
> >>> on some arches or run environments?
> >>
> >> It maps the number of pages you pass in the second argument, see
> >> vm_create.
> >>
> >>    if (phy_pages != 0)
> >>      vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
> >>                                  0, 0, phy_pages, 0);
> >>
> >> In this case:
> >>
> >>    data->vm = vm_create_default(VCPU_ID, mempages, guest_code);
> >>
> >> called here:
> >>
> >>    if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
> >>                    mem_size, slot_runtime)) {
> >>
> >> where mempages is mem_size, which is declared as:
> >>
> >>          uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
> >>
> >> but actually a better fix is just to pass a small fixed value (e.g.
> >> 1024) to vm_create_default, since all other regions are added by hand
> >
> > Yes, but the argument that is passed to vm_create_default() (mem_size
> > in the case of the test) is not passed as phy_pages to vm_create().
> > Rather, vm_create_with_vcpus() calculates some upper bound of extra
> > memory that is needed to cover that much guest memory (including for
> > its page tables).
> >
> > The biggest possible mem_size from memslot_perf_test is 512 MiB + 1
> > page, according to my calculations this results in phy_pages of 1029
> > (~4 MiB) in the x86-64 case and around 1540 (~6 MiB) in the s390x case
> > (here I am not sure about the exact number, since s390x has some
> > additional alignment requirements).
> >
> > Both values are well below 256 MiB (0x10000000UL), so I was wondering
> > what kind of circumstances can make these allocations collide (maybe I
> > am missing something in my analysis).
> 
> I see now that there has been a patch merged last week called
> "selftests: kvm: make allocation of extra memory take effect" by Zhenzhong
> that now allocates also the whole memory size passed to
> vm_create_default() (instead of just page tables for that much memory).
> 
> The commit message of this patch says that "perf_test_util and
> kvm_page_table_test use it to alloc extra memory currently", however both
> kvm_page_table_test and lib/perf_test_util framework explicitly add the
> required memory allocation by doing a vm_userspace_mem_region_add()
> call for the same memory size that they pass to vm_create_default().
> 
> So now they allocate this memory twice.
> 
> @Zhenzhong: did you notice improper operation of either
> kvm_page_table_test or perf_test_util-based tests (demand_paging_test,
> dirty_log_perf_test,
> memslot_modification_stress_test) before your patch?
No

> 
> They seem to work fine for me without the patch (and I guess other people
> would have noticed earlier, too, if they were broken).
> 
> After this patch not only these tests allocate their memory twice but it is
> harder to make vm_create_default() allocate the right amount of memory for
> the page tables in cases where the test needs to explicitly use
> vm_userspace_mem_region_add() for its allocations (because it wants the
> allocation placed at a specific GPA or in a specific memslot).
> 
> One has to basically open-code the page table size calculations from
> vm_create_with_vcpus() in the particular test then, taking also into account
> that vm_create_with_vcpus() will not only allocate the passed memory size
> (calculated page tables size) but also behave like it was allocating space for
> page tables for these page tables (even though the passed memory size itself
> is supposed to cover them).
Looks like we have a different understanding of the extra_mem_pages parameter of vm_create_default().

In your usage, extra_mem_pages is only used for the page table size calculation; the real extra memory allocation
happens in the separate call to vm_userspace_mem_region_add().

In my understanding, extra_mem_pages is for a VM that wants a custom memory size in slot0,
rather than the default DEFAULT_GUEST_PHY_PAGES size.
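
In other words (an illustrative sketch only, not real test code): under my
reading a caller that wants a bigger slot 0 would simply do

   /* slot 0 itself now covers DEFAULT_GUEST_PHY_PAGES + extra_mem_pages
    * (plus vcpu stacks and the page table estimate) */
   vm = vm_create_default(VCPU_ID, extra_mem_pages, guest_code);

while under your reading the extra memory is always added separately:

   vm = vm_create_default(VCPU_ID, extra_mem_pages, guest_code);
   vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
                               test_gpa, test_slot, extra_mem_pages, 0);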

I understand your comments and do agree that my patch brings some trouble to your code, sorry for that.
I'm fine with reverting that patch, and I think it's better to let the maintainers decide what extra_mem_pages
is used for.

Thanks
Zhenzhong
> 
> Due to the above, I suspect the previous behavior was, in fact, correct.
> 
> Thanks,
> Maciej

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test
  2021-06-03  5:26         ` Duan, Zhenzhong
@ 2021-06-03 12:37           ` Andrew Jones
  2021-06-03 13:05             ` Maciej S. Szmigiero
  2021-06-03 13:05           ` [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test Maciej S. Szmigiero
  1 sibling, 1 reply; 12+ messages in thread
From: Andrew Jones @ 2021-06-03 12:37 UTC (permalink / raw)
  To: Duan, Zhenzhong; +Cc: Maciej S. Szmigiero, Paolo Bonzini, linux-kernel, kvm

On Thu, Jun 03, 2021 at 05:26:33AM +0000, Duan, Zhenzhong wrote:
> > -----Original Message-----
> > From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
> > Sent: Thursday, June 3, 2021 7:07 AM
> > To: Paolo Bonzini <pbonzini@redhat.com>; Duan, Zhenzhong
> > <zhenzhong.duan@intel.com>
> > Cc: linux-kernel@vger.kernel.org; kvm@vger.kernel.org; Andrew Jones
> > <drjones@redhat.com>
> > Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in
> > memslot_perf_test
> > 
> > On 30.05.2021 01:13, Maciej S. Szmigiero wrote:
> > > On 29.05.2021 12:20, Paolo Bonzini wrote:
> > >> On 28/05/21 21:51, Maciej S. Szmigiero wrote:
> > >>> On 28.05.2021 21:11, Paolo Bonzini wrote:
> > >>>> The memory that is allocated in vm_create is already mapped close
> > >>>> to GPA 0, because test_execute passes the requested memory to
> > >>>> prepare_vm.  This causes overlapping memory regions and the test
> > >>>> crashes.  For simplicity just move MEM_GPA higher.
> > >>>>
> > >>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > >>>
> > >>> I am not sure that I understand the issue correctly, is
> > >>> vm_create_default() already reserving low GPAs (around 0x10000000)
> > >>> on some arches or run environments?
> > >>
> > >> It maps the number of pages you pass in the second argument, see
> > >> vm_create.
> > >>
> > >>    if (phy_pages != 0)
> > >>      vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
> > >>                                  0, 0, phy_pages, 0);
> > >>
> > >> In this case:
> > >>
> > >>    data->vm = vm_create_default(VCPU_ID, mempages, guest_code);
> > >>
> > >> called here:
> > >>
> > >>    if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
> > >>                    mem_size, slot_runtime)) {
> > >>
> > >> where mempages is mem_size, which is declared as:
> > >>
> > >>          uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
> > >>
> > >> but actually a better fix is just to pass a small fixed value (e.g.
> > >> 1024) to vm_create_default, since all other regions are added by hand
> > >
> > > Yes, but the argument that is passed to vm_create_default() (mem_size
> > > in the case of the test) is not passed as phy_pages to vm_create().
> > > Rather, vm_create_with_vcpus() calculates some upper bound of extra
> > > memory that is needed to cover that much guest memory (including for
> > > its page tables).
> > >
> > > The biggest possible mem_size from memslot_perf_test is 512 MiB + 1
> > > page, according to my calculations this results in phy_pages of 1029
> > > (~4 MiB) in the x86-64 case and around 1540 (~6 MiB) in the s390x case
> > > (here I am not sure about the exact number, since s390x has some
> > > additional alignment requirements).
> > >
> > > Both values are well below 256 MiB (0x10000000UL), so I was wondering
> > > what kind of circumstances can make these allocations collide (maybe I
> > > am missing something in my analysis).
> > 
> > I see now that there has been a patch merged last week called
> > "selftests: kvm: make allocation of extra memory take effect" by Zhenzhong
> > that now allocates also the whole memory size passed to
> > vm_create_default() (instead of just page tables for that much memory).
> > 
> > The commit message of this patch says that "perf_test_util and
> > kvm_page_table_test use it to alloc extra memory currently", however both
> > kvm_page_table_test and lib/perf_test_util framework explicitly add the
> > required memory allocation by doing a vm_userspace_mem_region_add()
> > call for the same memory size that they pass to vm_create_default().
> > 
> > So now they allocate this memory twice.
> > 
> > @Zhenzhong: did you notice improper operation of either
> > kvm_page_table_test or perf_test_util-based tests (demand_paging_test,
> > dirty_log_perf_test,
> > memslot_modification_stress_test) before your patch?
> No
> 
> > 
> > They seem to work fine for me without the patch (and I guess other people
> > would have noticed earlier, too, if they were broken).
> > 
> > After this patch not only these tests allocate their memory twice but it is
> > harder to make vm_create_default() allocate the right amount of memory for
> > the page tables in cases where the test needs to explicitly use
> > vm_userspace_mem_region_add() for its allocations (because it wants the
> > allocation placed at a specific GPA or in a specific memslot).
> > 
> > One has to basically open-code the page table size calculations from
> > vm_create_with_vcpus() in the particular test then, taking also into account
> > that vm_create_with_vcpus() will not only allocate the passed memory size
> > (calculated page tables size) but also behave like it was allocating space for
> > page tables for these page tables (even though the passed memory size itself
> > is supposed to cover them).
> Looks we have different understanding to the parameter extra_mem_pages of vm_create_default().
> 
> In your usage, extra_mem_pages is only used for page table calculations, real extra memory allocation
> happens in the extra call of vm_userspace_mem_region_add().

Yes, this is the meaning that kvm selftests has always had for
extra_mem_pages of vm_create_default(). If we'd rather have a different
meaning, that's fine, but we need to change all the callers of the
function as well.

If we decide to leave vm_create_default() the way it was by reverting this
patch, then maybe we should consider renaming the parameter and/or
documenting the function.

Thanks,
drew

> 
> In my understanding, extra_mem_pages is used for a VM who wants a custom memory size in slot0, 
> rather than the default DEFAULT_GUEST_PHY_PAGES size.
> 
> I understood your comments and do agree that my patch bring some trouble to your code, sorry for that.
> I'm fine to revert that patch and I think it's better to let the maintainers to decide what extra_mem_pages
> Is used for.
> 
> Thanks
> Zhenzhong
> > 
> > Due to the above, I suspect the previous behavior was, in fact, correct.
> > 
> > Thanks,
> > Maciej


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test
  2021-06-03 12:37           ` Andrew Jones
@ 2021-06-03 13:05             ` Maciej S. Szmigiero
  2021-06-04  3:35               ` Duan, Zhenzhong
  0 siblings, 1 reply; 12+ messages in thread
From: Maciej S. Szmigiero @ 2021-06-03 13:05 UTC (permalink / raw)
  To: Andrew Jones; +Cc: Paolo Bonzini, linux-kernel, kvm, Duan, Zhenzhong

On 03.06.2021 14:37, Andrew Jones wrote:
> On Thu, Jun 03, 2021 at 05:26:33AM +0000, Duan, Zhenzhong wrote:
>>> -----Original Message-----
>>> From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
>>> Sent: Thursday, June 3, 2021 7:07 AM
>>> To: Paolo Bonzini <pbonzini@redhat.com>; Duan, Zhenzhong
>>> <zhenzhong.duan@intel.com>
>>> Cc: linux-kernel@vger.kernel.org; kvm@vger.kernel.org; Andrew Jones
>>> <drjones@redhat.com>
>>> Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in
>>> memslot_perf_test
>>>
>>> On 30.05.2021 01:13, Maciej S. Szmigiero wrote:
>>>> On 29.05.2021 12:20, Paolo Bonzini wrote:
>>>>> On 28/05/21 21:51, Maciej S. Szmigiero wrote:
>>>>>> On 28.05.2021 21:11, Paolo Bonzini wrote:
>>>>>>> The memory that is allocated in vm_create is already mapped close
>>>>>>> to GPA 0, because test_execute passes the requested memory to
>>>>>>> prepare_vm.  This causes overlapping memory regions and the test
>>>>>>> crashes.  For simplicity just move MEM_GPA higher.
>>>>>>>
>>>>>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>>>>>
>>>>>> I am not sure that I understand the issue correctly, is
>>>>>> vm_create_default() already reserving low GPAs (around 0x10000000)
>>>>>> on some arches or run environments?
>>>>>
>>>>> It maps the number of pages you pass in the second argument, see
>>>>> vm_create.
>>>>>
>>>>>     if (phy_pages != 0)
>>>>>       vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
>>>>>                                   0, 0, phy_pages, 0);
>>>>>
>>>>> In this case:
>>>>>
>>>>>     data->vm = vm_create_default(VCPU_ID, mempages, guest_code);
>>>>>
>>>>> called here:
>>>>>
>>>>>     if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
>>>>>                     mem_size, slot_runtime)) {
>>>>>
>>>>> where mempages is mem_size, which is declared as:
>>>>>
>>>>>           uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
>>>>>
>>>>> but actually a better fix is just to pass a small fixed value (e.g.
>>>>> 1024) to vm_create_default, since all other regions are added by hand
>>>>
>>>> Yes, but the argument that is passed to vm_create_default() (mem_size
>>>> in the case of the test) is not passed as phy_pages to vm_create().
>>>> Rather, vm_create_with_vcpus() calculates some upper bound of extra
>>>> memory that is needed to cover that much guest memory (including for
>>>> its page tables).
>>>>
>>>> The biggest possible mem_size from memslot_perf_test is 512 MiB + 1
>>>> page, according to my calculations this results in phy_pages of 1029
>>>> (~4 MiB) in the x86-64 case and around 1540 (~6 MiB) in the s390x case
>>>> (here I am not sure about the exact number, since s390x has some
>>>> additional alignment requirements).
>>>>
>>>> Both values are well below 256 MiB (0x10000000UL), so I was wondering
>>>> what kind of circumstances can make these allocations collide (maybe I
>>>> am missing something in my analysis).
>>>
>>> I see now that there has been a patch merged last week called
>>> "selftests: kvm: make allocation of extra memory take effect" by Zhenzhong
>>> that now allocates also the whole memory size passed to
>>> vm_create_default() (instead of just page tables for that much memory).
>>>
>>> The commit message of this patch says that "perf_test_util and
>>> kvm_page_table_test use it to alloc extra memory currently", however both
>>> kvm_page_table_test and lib/perf_test_util framework explicitly add the
>>> required memory allocation by doing a vm_userspace_mem_region_add()
>>> call for the same memory size that they pass to vm_create_default().
>>>
>>> So now they allocate this memory twice.
>>>
>>> @Zhenzhong: did you notice improper operation of either
>>> kvm_page_table_test or perf_test_util-based tests (demand_paging_test,
>>> dirty_log_perf_test,
>>> memslot_modification_stress_test) before your patch?
>> No
>>
>>>
>>> They seem to work fine for me without the patch (and I guess other people
>>> would have noticed earlier, too, if they were broken).
>>>
>>> After this patch not only these tests allocate their memory twice but it is
>>> harder to make vm_create_default() allocate the right amount of memory for
>>> the page tables in cases where the test needs to explicitly use
>>> vm_userspace_mem_region_add() for its allocations (because it wants the
>>> allocation placed at a specific GPA or in a specific memslot).
>>>
>>> One has to basically open-code the page table size calculations from
>>> vm_create_with_vcpus() in the particular test then, taking also into account
>>> that vm_create_with_vcpus() will not only allocate the passed memory size
>>> (calculated page tables size) but also behave like it was allocating space for
>>> page tables for these page tables (even though the passed memory size itself
>>> is supposed to cover them).
>> Looks we have different understanding to the parameter extra_mem_pages of vm_create_default().
>>
>> In your usage, extra_mem_pages is only used for page table calculations, real extra memory allocation
>> happens in the extra call of vm_userspace_mem_region_add().
> 
> Yes, this is the meaning that kvm selftests has always had for
> extra_mem_pages of vm_create_default(). If we'd rather have a different
> meaning, that's fine, but we need to change all the callers of the
> function as well.

If we change the meaning of extra_mem_pages (that is, keep the patch), it
would be good to still have an additional parameter to vm_create_with_vcpus()
for tests that have to allocate their memory on their own via
vm_userspace_mem_region_add(), so that vm_create_with_vcpus() just allocates
the page tables for these manual allocations.
Or a helper to calculate the required extra_mem_pages for them.
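
Something along these lines, for example (a rough sketch; the helper name
and the exact rounding are made up here):

   /* rough sketch: how many extra pages vm_create_with_vcpus() should
    * reserve so that page tables for a manually added region of
    * mem_pages pages fit (mirrors the existing extra_pg_pages math) */
   static uint64_t vm_calc_extra_pt_pages(uint64_t mem_pages)
   {
           return mem_pages / PTES_PER_MIN_PAGE * 2;
   }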
  
> If we decide to leave vm_create_default() the way it was by reverting this
> patch, then maybe we should consider renaming the parameter and/or
> documenting the function.

Adding a descriptive comment (and possibly renaming the parameter) seems
like a much simpler solution to me than adapting these tests (and possibly
adding the parameter or helper described above for them).

> Thanks,
> drew

Thanks,
Maciej

>>
>> In my understanding, extra_mem_pages is used for a VM who wants a custom memory size in slot0,
>> rather than the default DEFAULT_GUEST_PHY_PAGES size.
>>
>> I understood your comments and do agree that my patch bring some trouble to your code, sorry for that.
>> I'm fine to revert that patch and I think it's better to let the maintainers to decide what extra_mem_pages
>> Is used for.
>>
>> Thanks
>> Zhenzhong
>>>
>>> Due to the above, I suspect the previous behavior was, in fact, correct.
>>>
>>> Thanks,
>>> Maciej
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test
  2021-06-03  5:26         ` Duan, Zhenzhong
  2021-06-03 12:37           ` Andrew Jones
@ 2021-06-03 13:05           ` Maciej S. Szmigiero
  1 sibling, 0 replies; 12+ messages in thread
From: Maciej S. Szmigiero @ 2021-06-03 13:05 UTC (permalink / raw)
  To: Duan, Zhenzhong; +Cc: linux-kernel, kvm, Andrew Jones, Paolo Bonzini

On 03.06.2021 07:26, Duan, Zhenzhong wrote:
>> -----Original Message-----
>> From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
>> Sent: Thursday, June 3, 2021 7:07 AM
>> To: Paolo Bonzini <pbonzini@redhat.com>; Duan, Zhenzhong
>> <zhenzhong.duan@intel.com>
>> Cc: linux-kernel@vger.kernel.org; kvm@vger.kernel.org; Andrew Jones
>> <drjones@redhat.com>
>> Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in
>> memslot_perf_test
>>
>> On 30.05.2021 01:13, Maciej S. Szmigiero wrote:
>>> On 29.05.2021 12:20, Paolo Bonzini wrote:
>>>> On 28/05/21 21:51, Maciej S. Szmigiero wrote:
>>>>> On 28.05.2021 21:11, Paolo Bonzini wrote:
>>>>>> The memory that is allocated in vm_create is already mapped close
>>>>>> to GPA 0, because test_execute passes the requested memory to
>>>>>> prepare_vm.  This causes overlapping memory regions and the test
>>>>>> crashes.  For simplicity just move MEM_GPA higher.
>>>>>>
>>>>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>>>>
>>>>> I am not sure that I understand the issue correctly, is
>>>>> vm_create_default() already reserving low GPAs (around 0x10000000)
>>>>> on some arches or run environments?
>>>>
>>>> It maps the number of pages you pass in the second argument, see
>>>> vm_create.
>>>>
>>>>     if (phy_pages != 0)
>>>>       vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
>>>>                                   0, 0, phy_pages, 0);
>>>>
>>>> In this case:
>>>>
>>>>     data->vm = vm_create_default(VCPU_ID, mempages, guest_code);
>>>>
>>>> called here:
>>>>
>>>>     if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
>>>>                     mem_size, slot_runtime)) {
>>>>
>>>> where mempages is mem_size, which is declared as:
>>>>
>>>>           uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
>>>>
>>>> but actually a better fix is just to pass a small fixed value (e.g.
>>>> 1024) to vm_create_default, since all other regions are added by hand
>>>
>>> Yes, but the argument that is passed to vm_create_default() (mem_size
>>> in the case of the test) is not passed as phy_pages to vm_create().
>>> Rather, vm_create_with_vcpus() calculates some upper bound of extra
>>> memory that is needed to cover that much guest memory (including for
>>> its page tables).
>>>
>>> The biggest possible mem_size from memslot_perf_test is 512 MiB + 1
>>> page, according to my calculations this results in phy_pages of 1029
>>> (~4 MiB) in the x86-64 case and around 1540 (~6 MiB) in the s390x case
>>> (here I am not sure about the exact number, since s390x has some
>>> additional alignment requirements).
>>>
>>> Both values are well below 256 MiB (0x10000000UL), so I was wondering
>>> what kind of circumstances can make these allocations collide (maybe I
>>> am missing something in my analysis).
>>
>> I see now that there has been a patch merged last week called
>> "selftests: kvm: make allocation of extra memory take effect" by Zhenzhong
>> that now allocates also the whole memory size passed to
>> vm_create_default() (instead of just page tables for that much memory).
>>
>> The commit message of this patch says that "perf_test_util and
>> kvm_page_table_test use it to alloc extra memory currently", however both
>> kvm_page_table_test and lib/perf_test_util framework explicitly add the
>> required memory allocation by doing a vm_userspace_mem_region_add()
>> call for the same memory size that they pass to vm_create_default().
>>
>> So now they allocate this memory twice.
>>
>> @Zhenzhong: did you notice improper operation of either
>> kvm_page_table_test or perf_test_util-based tests (demand_paging_test,
>> dirty_log_perf_test,
>> memslot_modification_stress_test) before your patch?
> No
> 
>>
>> They seem to work fine for me without the patch (and I guess other people
>> would have noticed earlier, too, if they were broken).
>>
>> After this patch not only these tests allocate their memory twice but it is
>> harder to make vm_create_default() allocate the right amount of memory for
>> the page tables in cases where the test needs to explicitly use
>> vm_userspace_mem_region_add() for its allocations (because it wants the
>> allocation placed at a specific GPA or in a specific memslot).
>>
>> One has to basically open-code the page table size calculations from
>> vm_create_with_vcpus() in the particular test then, taking also into account
>> that vm_create_with_vcpus() will not only allocate the passed memory size
>> (calculated page tables size) but also behave like it was allocating space for
>> page tables for these page tables (even though the passed memory size itself
>> is supposed to cover them).
> Looks we have different understanding to the parameter extra_mem_pages of vm_create_default().
> 
> In your usage, extra_mem_pages is only used for page table calculations, real extra memory allocation
> happens in the extra call of vm_userspace_mem_region_add().
> 
> In my understanding, extra_mem_pages is used for a VM who wants a custom memory size in slot0,
> rather than the default DEFAULT_GUEST_PHY_PAGES size.
> 
> I understood your comments and do agree that my patch bring some trouble to your code, sorry for that.
> I'm fine to revert that patch and I think it's better to let the maintainers to decide what extra_mem_pages
> Is used for.

No problem, I just noticed the inconsistent behavior.
I had coded memslot_perf_test against the old behavior (like the other
tests) and was surprised that there were guest memory allocation collisions.

> Thanks
> Zhenzhong

Thanks,
Maciej

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test
  2021-06-03 13:05             ` Maciej S. Szmigiero
@ 2021-06-04  3:35               ` Duan, Zhenzhong
  2021-06-04 16:49                 ` selftests: kvm: allocating extra mem in slot 0 (Was: Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test) Maciej S. Szmigiero
  0 siblings, 1 reply; 12+ messages in thread
From: Duan, Zhenzhong @ 2021-06-04  3:35 UTC (permalink / raw)
  To: Maciej S. Szmigiero, Andrew Jones; +Cc: Paolo Bonzini, linux-kernel, kvm



> -----Original Message-----
> From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
> Sent: Thursday, June 3, 2021 9:06 PM
> To: Andrew Jones <drjones@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>; linux-kernel@vger.kernel.org;
> kvm@vger.kernel.org; Duan, Zhenzhong <zhenzhong.duan@intel.com>
> Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in
> memslot_perf_test
> 
> On 03.06.2021 14:37, Andrew Jones wrote:
> > On Thu, Jun 03, 2021 at 05:26:33AM +0000, Duan, Zhenzhong wrote:
> >>> -----Original Message-----
> >>> From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
> >>> Sent: Thursday, June 3, 2021 7:07 AM
> >>> To: Paolo Bonzini <pbonzini@redhat.com>; Duan, Zhenzhong
> >>> <zhenzhong.duan@intel.com>
> >>> Cc: linux-kernel@vger.kernel.org; kvm@vger.kernel.org; Andrew Jones
> >>> <drjones@redhat.com>
> >>> Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in
> >>> memslot_perf_test
> >>>
> >>> On 30.05.2021 01:13, Maciej S. Szmigiero wrote:
> >>>> On 29.05.2021 12:20, Paolo Bonzini wrote:
> >>>>> On 28/05/21 21:51, Maciej S. Szmigiero wrote:
> >>>>>> On 28.05.2021 21:11, Paolo Bonzini wrote:
> >>>>>>> The memory that is allocated in vm_create is already mapped
> >>>>>>> close to GPA 0, because test_execute passes the requested memory
> >>>>>>> to prepare_vm.  This causes overlapping memory regions and the
> >>>>>>> test crashes.  For simplicity just move MEM_GPA higher.
> >>>>>>>
> >>>>>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> >>>>>>
> >>>>>> I am not sure that I understand the issue correctly, is
> >>>>>> vm_create_default() already reserving low GPAs (around
> >>>>>> 0x10000000) on some arches or run environments?
> >>>>>
> >>>>> It maps the number of pages you pass in the second argument, see
> >>>>> vm_create.
> >>>>>
> >>>>>     if (phy_pages != 0)
> >>>>>       vm_userspace_mem_region_add(vm,
> VM_MEM_SRC_ANONYMOUS,
> >>>>>                                   0, 0, phy_pages, 0);
> >>>>>
> >>>>> In this case:
> >>>>>
> >>>>>     data->vm = vm_create_default(VCPU_ID, mempages, guest_code);
> >>>>>
> >>>>> called here:
> >>>>>
> >>>>>     if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
> >>>>>                     mem_size, slot_runtime)) {
> >>>>>
> >>>>> where mempages is mem_size, which is declared as:
> >>>>>
> >>>>>           uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
> >>>>>
> >>>>> but actually a better fix is just to pass a small fixed value (e.g.
> >>>>> 1024) to vm_create_default, since all other regions are added by
> >>>>> hand
> >>>>
> >>>> Yes, but the argument that is passed to vm_create_default()
> >>>> (mem_size in the case of the test) is not passed as phy_pages to
> vm_create().
> >>>> Rather, vm_create_with_vcpus() calculates some upper bound of extra
> >>>> memory that is needed to cover that much guest memory (including
> >>>> for its page tables).
> >>>>
> >>>> The biggest possible mem_size from memslot_perf_test is 512 MiB + 1
> >>>> page, according to my calculations this results in phy_pages of
> >>>> 1029
> >>>> (~4 MiB) in the x86-64 case and around 1540 (~6 MiB) in the s390x
> >>>> case (here I am not sure about the exact number, since s390x has
> >>>> some additional alignment requirements).
> >>>>
> >>>> Both values are well below 256 MiB (0x10000000UL), so I was
> >>>> wondering what kind of circumstances can make these allocations
> >>>> collide (maybe I am missing something in my analysis).
> >>>
> >>> I see now that there has been a patch merged last week called
> >>> "selftests: kvm: make allocation of extra memory take effect" by
> >>> Zhenzhong that now allocates also the whole memory size passed to
> >>> vm_create_default() (instead of just page tables for that much memory).
> >>>
> >>> The commit message of this patch says that "perf_test_util and
> >>> kvm_page_table_test use it to alloc extra memory currently", however
> >>> both kvm_page_table_test and lib/perf_test_util framework explicitly
> >>> add the required memory allocation by doing a
> >>> vm_userspace_mem_region_add() call for the same memory size that
> they pass to vm_create_default().
> >>>
> >>> So now they allocate this memory twice.
> >>>
> >>> @Zhenzhong: did you notice improper operation of either
> >>> kvm_page_table_test or perf_test_util-based tests
> >>> (demand_paging_test, dirty_log_perf_test,
> >>> memslot_modification_stress_test) before your patch?
> >> No
> >>
> >>>
> >>> They seem to work fine for me without the patch (and I guess other
> >>> people would have noticed earlier, too, if they were broken).
> >>>
> >>> After this patch not only these tests allocate their memory twice
> >>> but it is harder to make vm_create_default() allocate the right
> >>> amount of memory for the page tables in cases where the test needs
> >>> to explicitly use
> >>> vm_userspace_mem_region_add() for its allocations (because it wants
> >>> the allocation placed at a specific GPA or in a specific memslot).
> >>>
> >>> One has to basically open-code the page table size calculations from
> >>> vm_create_with_vcpus() in the particular test then, taking also into
> >>> account that vm_create_with_vcpus() will not only allocate the
> >>> passed memory size (calculated page tables size) but also behave
> >>> like it was allocating space for page tables for these page tables
> >>> (even though the passed memory size itself is supposed to cover them).
> >> Looks we have different understanding to the parameter
> extra_mem_pages of vm_create_default().
> >>
> >> In your usage, extra_mem_pages is only used for page table
> >> calculations, real extra memory allocation happens in the extra call of
> vm_userspace_mem_region_add().
> >
> > Yes, this is the meaning that kvm selftests has always had for
> > extra_mem_pages of vm_create_default(). If we'd rather have a
> > different meaning, that's fine, but we need to change all the callers
> > of the function as well.
> 
> If we change the meaning of extra_mem_pages (keep the patch) it would be
> good to still have an additional parameter to vm_create_with_vcpus() for
> tests that have to allocate their memory on their own via
> vm_userspace_mem_region_add() for vm_create_with_vcpus() to just
> allocate the page tables for these manual allocations.
> Or a helper to calculate the required extra_mem_pages for them.
> 
> > If we decide to leave vm_create_default() the way it was by reverting
> > this patch, then maybe we should consider renaming the parameter
> > and/or documenting the function.
> 
> Adding a descriptive comment (and possibly renaming the parameter) seems
> like a much simpler solution to me that adapting these tests (and possibly
> adding the parameter or helper described above for them).

Agree, I prefer the simpler way.

I also have an idea for custom slot0 memory: keep extra_mem_pages working the original way and add a global slot0_pages for the custom slot0 size. Maybe not a good choice as it's not thread safe, just for discussion. That is:
1. revert "selftests: kvm: make allocation of extra memory take effect"
2. add below patch
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -280,6 +280,9 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
                                 void *guest_code);

+struct kvm_vm *vm_create_slot0(uint32_t vcpuid, uint64_t slot0_mem_pages,
+                              uint64_t extra_mem_pages, void *guest_code);
+
 /* Same as vm_create_default, but can be used for more than one vcpu */
 struct kvm_vm *vm_create_default_with_vcpus(uint32_t nr_vcpus, uint64_t extra_mem_pages,
                                            uint32_t num_percpu_pages, void *guest_code,
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 63418df921f0..56b1225865d5 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -196,6 +196,7 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
               "Missing new mode params?");

+uint64_t slot0_pages = DEFAULT_GUEST_PHY_PAGES;
 /*
  * VM Create
  *
@@ -319,8 +320,8 @@ struct kvm_vm *vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
         * than N/x*2.
         */
        uint64_t vcpu_pages = (DEFAULT_STACK_PGS + num_percpu_pages) * nr_vcpus;
-       uint64_t extra_pg_pages = (extra_mem_pages + vcpu_pages) / PTES_PER_MIN_PAGE * 2;
-       uint64_t pages = DEFAULT_GUEST_PHY_PAGES + vcpu_pages + extra_pg_pages;
+       uint64_t extra_pg_pages = (slot0_pages + extra_mem_pages + vcpu_pages) / PTES_PER_MIN_PAGE * 2;
+       uint64_t pages = slot0_pages + vcpu_pages + extra_pg_pages;
        struct kvm_vm *vm;
        int i;

@@ -358,9 +359,18 @@ struct kvm_vm *vm_create_default_with_vcpus(uint32_t nr_vcpus, uint64_t extra_me
                                    num_percpu_pages, guest_code, vcpuids);
 }

+struct kvm_vm *vm_create_slot0(uint32_t vcpuid, uint64_t slot0_mem_pages,
+                                      uint64_t extra_mem_pages, void *guest_code)
+{
+       slot0_pages = slot0_mem_pages;
+       return vm_create_default_with_vcpus(1, extra_mem_pages, 0, guest_code,
+                                           (uint32_t []){ vcpuid });
+}
+
 struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
                                 void *guest_code)
 {
+       slot0_pages = DEFAULT_GUEST_PHY_PAGES;
        return vm_create_default_with_vcpus(1, extra_mem_pages, 0, guest_code,
                                            (uint32_t []){ vcpuid });
 }
@@ -626,6 +636,9 @@ void kvm_vm_free(struct kvm_vm *vmp)

        /* Free the structure describing the VM. */
        free(vmp);
+
+       /* Restore slot0 memory to default size for next VM creation */
+       slot0_pages = DEFAULT_GUEST_PHY_PAGES;
 }

 /*

Thanks
Zhenzhong

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* selftests: kvm: allocating extra mem in slot 0 (Was: Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test)
  2021-06-04  3:35               ` Duan, Zhenzhong
@ 2021-06-04 16:49                 ` Maciej S. Szmigiero
  2021-06-08  3:20                   ` Duan, Zhenzhong
  0 siblings, 1 reply; 12+ messages in thread
From: Maciej S. Szmigiero @ 2021-06-04 16:49 UTC (permalink / raw)
  To: Duan, Zhenzhong; +Cc: Paolo Bonzini, linux-kernel, kvm, Andrew Jones

On 04.06.2021 05:35, Duan, Zhenzhong wrote:
>> -----Original Message-----
>> From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
>> Sent: Thursday, June 3, 2021 9:06 PM
>> To: Andrew Jones <drjones@redhat.com>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>; linux-kernel@vger.kernel.org;
>> kvm@vger.kernel.org; Duan, Zhenzhong <zhenzhong.duan@intel.com>
>> Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in
>> memslot_perf_test
>>
>> On 03.06.2021 14:37, Andrew Jones wrote:
>>> On Thu, Jun 03, 2021 at 05:26:33AM +0000, Duan, Zhenzhong wrote:
>>>>> -----Original Message-----
>>>>> From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
>>>>> Sent: Thursday, June 3, 2021 7:07 AM
>>>>> To: Paolo Bonzini <pbonzini@redhat.com>; Duan, Zhenzhong
>>>>> <zhenzhong.duan@intel.com>
>>>>> Cc: linux-kernel@vger.kernel.org; kvm@vger.kernel.org; Andrew Jones
>>>>> <drjones@redhat.com>
>>>>> Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in
>>>>> memslot_perf_test
>>>>>
>>>>> On 30.05.2021 01:13, Maciej S. Szmigiero wrote:
>>>>>> On 29.05.2021 12:20, Paolo Bonzini wrote:
>>>>>>> On 28/05/21 21:51, Maciej S. Szmigiero wrote:
>>>>>>>> On 28.05.2021 21:11, Paolo Bonzini wrote:
>>>>>>>>> The memory that is allocated in vm_create is already mapped
>>>>>>>>> close to GPA 0, because test_execute passes the requested memory
>>>>>>>>> to prepare_vm.  This causes overlapping memory regions and the
>>>>>>>>> test crashes.  For simplicity just move MEM_GPA higher.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>>>>>>>
>>>>>>>> I am not sure that I understand the issue correctly, is
>>>>>>>> vm_create_default() already reserving low GPAs (around
>>>>>>>> 0x10000000) on some arches or run environments?
>>>>>>>
>>>>>>> It maps the number of pages you pass in the second argument, see
>>>>>>> vm_create.
>>>>>>>
>>>>>>>      if (phy_pages != 0)
>>>>>>>        vm_userspace_mem_region_add(vm,
>> VM_MEM_SRC_ANONYMOUS,
>>>>>>>                                    0, 0, phy_pages, 0);
>>>>>>>
>>>>>>> In this case:
>>>>>>>
>>>>>>>      data->vm = vm_create_default(VCPU_ID, mempages, guest_code);
>>>>>>>
>>>>>>> called here:
>>>>>>>
>>>>>>>      if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
>>>>>>>                      mem_size, slot_runtime)) {
>>>>>>>
>>>>>>> where mempages is mem_size, which is declared as:
>>>>>>>
>>>>>>>            uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
>>>>>>>
>>>>>>> but actually a better fix is just to pass a small fixed value (e.g.
>>>>>>> 1024) to vm_create_default, since all other regions are added by
>>>>>>> hand
>>>>>>
>>>>>> Yes, but the argument that is passed to vm_create_default()
>>>>>> (mem_size in the case of the test) is not passed as phy_pages to
>> vm_create().
>>>>>> Rather, vm_create_with_vcpus() calculates some upper bound of extra
>>>>>> memory that is needed to cover that much guest memory (including
>>>>>> for its page tables).
>>>>>>
>>>>>> The biggest possible mem_size from memslot_perf_test is 512 MiB + 1
>>>>>> page, according to my calculations this results in phy_pages of
>>>>>> 1029
>>>>>> (~4 MiB) in the x86-64 case and around 1540 (~6 MiB) in the s390x
>>>>>> case (here I am not sure about the exact number, since s390x has
>>>>>> some additional alignment requirements).
>>>>>>
>>>>>> Both values are well below 256 MiB (0x10000000UL), so I was
>>>>>> wondering what kind of circumstances can make these allocations
>>>>>> collide (maybe I am missing something in my analysis).
>>>>>
>>>>> I see now that there has been a patch merged last week called
>>>>> "selftests: kvm: make allocation of extra memory take effect" by
>>>>> Zhenzhong that now allocates also the whole memory size passed to
>>>>> vm_create_default() (instead of just page tables for that much memory).
>>>>>
>>>>> The commit message of this patch says that "perf_test_util and
>>>>> kvm_page_table_test use it to alloc extra memory currently", however
>>>>> both kvm_page_table_test and lib/perf_test_util framework explicitly
>>>>> add the required memory allocation by doing a
>>>>> vm_userspace_mem_region_add() call for the same memory size that
>> they pass to vm_create_default().
>>>>>
>>>>> So now they allocate this memory twice.
>>>>>
>>>>> @Zhenzhong: did you notice improper operation of either
>>>>> kvm_page_table_test or perf_test_util-based tests
>>>>> (demand_paging_test, dirty_log_perf_test,
>>>>> memslot_modification_stress_test) before your patch?
>>>> No
>>>>
>>>>>
>>>>> They seem to work fine for me without the patch (and I guess other
>>>>> people would have noticed earlier, too, if they were broken).
>>>>>
>>>>> After this patch not only these tests allocate their memory twice
>>>>> but it is harder to make vm_create_default() allocate the right
>>>>> amount of memory for the page tables in cases where the test needs
>>>>> to explicitly use
>>>>> vm_userspace_mem_region_add() for its allocations (because it wants
>>>>> the allocation placed at a specific GPA or in a specific memslot).
>>>>>
>>>>> One has to basically open-code the page table size calculations from
>>>>> vm_create_with_vcpus() in the particular test then, taking also into
>>>>> account that vm_create_with_vcpus() will not only allocate the
>>>>> passed memory size (calculated page tables size) but also behave
>>>>> like it was allocating space for page tables for these page tables
>>>>> (even though the passed memory size itself is supposed to cover them).
>>>> Looks we have different understanding to the parameter
>> extra_mem_pages of vm_create_default().
>>>>
>>>> In your usage, extra_mem_pages is only used for page table
>>>> calculations, real extra memory allocation happens in the extra call of
>> vm_userspace_mem_region_add().
>>>
>>> Yes, this is the meaning that kvm selftests has always had for
>>> extra_mem_pages of vm_create_default(). If we'd rather have a
>>> different meaning, that's fine, but we need to change all the callers
>>> of the function as well.
>>
>> If we change the meaning of extra_mem_pages (keep the patch) it would be
>> good to still have an additional parameter to vm_create_with_vcpus() for
>> tests that have to allocate their memory on their own via
>> vm_userspace_mem_region_add() for vm_create_with_vcpus() to just
>> allocate the page tables for these manual allocations.
>> Or a helper to calculate the required extra_mem_pages for them.
>>
>>> If we decide to leave vm_create_default() the way it was by reverting
>>> this patch, then maybe we should consider renaming the parameter
>>> and/or documenting the function.
>>
>> Adding a descriptive comment (and possibly renaming the parameter) seems
>> like a much simpler solution to me that adapting these tests (and possibly
>> adding the parameter or helper described above for them).
> 
> Agree, I prefer the simpler way.
> 
> I also think of an idea for custom slot0 memory, keep extra_mem_pages the original way, adding a global slot0_pages for custom slot0 memory. Maybe not a good choice as it's not thread safe, just for discussion. That is:
> 1. revert "selftests: kvm: make allocation of extra memory take effect"
> 2. add below patch
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -280,6 +280,9 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
>   struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
>                                   void *guest_code);
> 
> +struct kvm_vm *vm_create_slot0(uint32_t vcpuid, uint64_t slot0_mem_pages,
> +                              uint64_t extra_mem_pages, void *guest_code);
> +
>   /* Same as vm_create_default, but can be used for more than one vcpu */
>   struct kvm_vm *vm_create_default_with_vcpus(uint32_t nr_vcpus, uint64_t extra_mem_pages,
>                                              uint32_t num_percpu_pages, void *guest_code,
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 63418df921f0..56b1225865d5 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -196,6 +196,7 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
>   _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
>                 "Missing new mode params?");
> 
> +uint64_t slot0_pages = DEFAULT_GUEST_PHY_PAGES;
>   /*
>    * VM Create
>    *
> @@ -319,8 +320,8 @@ struct kvm_vm *vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
>           * than N/x*2.
>           */
>          uint64_t vcpu_pages = (DEFAULT_STACK_PGS + num_percpu_pages) * nr_vcpus;
> -       uint64_t extra_pg_pages = (extra_mem_pages + vcpu_pages) / PTES_PER_MIN_PAGE * 2;
> -       uint64_t pages = DEFAULT_GUEST_PHY_PAGES + vcpu_pages + extra_pg_pages;
> +       uint64_t extra_pg_pages = (slot0_pages + extra_mem_pages + vcpu_pages) / PTES_PER_MIN_PAGE * 2;
> +       uint64_t pages = slot0_pages + vcpu_pages + extra_pg_pages;
>          struct kvm_vm *vm;
>          int i;
> 
> @@ -358,9 +359,18 @@ struct kvm_vm *vm_create_default_with_vcpus(uint32_t nr_vcpus, uint64_t extra_me
>                                      num_percpu_pages, guest_code, vcpuids);
>   }
> 
> +struct kvm_vm *vm_create_slot0(uint32_t vcpuid, uint64_t slot0_mem_pages,
> +                                      uint64_t extra_mem_pages, void *guest_code)
> +{
> +       slot0_pages = slot0_mem_pages;
> +       return vm_create_default_with_vcpus(1, extra_mem_pages, 0, guest_code,
> +                                           (uint32_t []){ vcpuid });
> +}
> +
>   struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
>                                   void *guest_code)
>   {
> +       slot0_pages = DEFAULT_GUEST_PHY_PAGES;
>          return vm_create_default_with_vcpus(1, extra_mem_pages, 0, guest_code,
>                                              (uint32_t []){ vcpuid });
>   }
> @@ -626,6 +636,9 @@ void kvm_vm_free(struct kvm_vm *vmp)
> 
>          /* Free the structure describing the VM. */
>          free(vmp);
> +
> +       /* Restore slot0 memory to default size for next VM creation */
> +       slot0_pages = DEFAULT_GUEST_PHY_PAGES;
>   }
> 
>   /*

In terms of thread safety a quick glance at current tests seems to
suggest that none of them create VMs from anything but their main
threads (although the s390x diag318 handler for sync_regs_test does some
suspicious stuff).

But I think a better solution than adding a global variable as an implicit
parameter to vm_create_with_vcpus() is to simply add an extra explicit
parameter to this function - it has just 3 callers that need to be
(trivially) adapted then.
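
Roughly something like this (only a sketch; the slot0_mem_pages name and its
position in the argument list are placeholders, not a final interface):

	/* sketch: explicit slot0 size instead of an implicit global */
	struct kvm_vm *vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
					    uint64_t slot0_mem_pages,
					    uint64_t extra_mem_pages,
					    uint32_t num_percpu_pages,
					    void *guest_code, uint32_t vcpuids[]);

with the three existing callers simply passing DEFAULT_GUEST_PHY_PAGES for
slot0_mem_pages to keep the current behavior.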

> Thanks
> Zhenzhong
> 

Thanks,
Maciej

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: selftests: kvm: allocating extra mem in slot 0 (Was: Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test)
  2021-06-04 16:49                 ` selftests: kvm: allocating extra mem in slot 0 (Was: Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test) Maciej S. Szmigiero
@ 2021-06-08  3:20                   ` Duan, Zhenzhong
  0 siblings, 0 replies; 12+ messages in thread
From: Duan, Zhenzhong @ 2021-06-08  3:20 UTC (permalink / raw)
  To: Maciej S. Szmigiero; +Cc: Paolo Bonzini, linux-kernel, kvm, Andrew Jones



> -----Original Message-----
> From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
> Sent: Saturday, June 5, 2021 12:49 AM
> To: Duan, Zhenzhong <zhenzhong.duan@intel.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>; linux-kernel@vger.kernel.org;
> kvm@vger.kernel.org; Andrew Jones <drjones@redhat.com>
> Subject: selftests: kvm: allocating extra mem in slot 0 (Was: Re: [PATCH]
> selftests: kvm: fix overlapping addresses in memslot_perf_test)
> 
> On 04.06.2021 05:35, Duan, Zhenzhong wrote:
> >> -----Original Message-----
> >> From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
> >> Sent: Thursday, June 3, 2021 9:06 PM
> >> To: Andrew Jones <drjones@redhat.com>
> >> Cc: Paolo Bonzini <pbonzini@redhat.com>;
> >> linux-kernel@vger.kernel.org; kvm@vger.kernel.org; Duan, Zhenzhong
> >> <zhenzhong.duan@intel.com>
> >> Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in
> >> memslot_perf_test
> >>
> >> On 03.06.2021 14:37, Andrew Jones wrote:
> >>> On Thu, Jun 03, 2021 at 05:26:33AM +0000, Duan, Zhenzhong wrote:
> >>>>> -----Original Message-----
> >>>>> From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
> >>>>> Sent: Thursday, June 3, 2021 7:07 AM
> >>>>> To: Paolo Bonzini <pbonzini@redhat.com>; Duan, Zhenzhong
> >>>>> <zhenzhong.duan@intel.com>
> >>>>> Cc: linux-kernel@vger.kernel.org; kvm@vger.kernel.org; Andrew
> >>>>> Jones <drjones@redhat.com>
> >>>>> Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in
> >>>>> memslot_perf_test
> >>>>>
> >>>>> On 30.05.2021 01:13, Maciej S. Szmigiero wrote:
> >>>>>> On 29.05.2021 12:20, Paolo Bonzini wrote:
> >>>>>>> On 28/05/21 21:51, Maciej S. Szmigiero wrote:
> >>>>>>>> On 28.05.2021 21:11, Paolo Bonzini wrote:
> >>>>>>>>> The memory that is allocated in vm_create is already mapped
> >>>>>>>>> close to GPA 0, because test_execute passes the requested
> >>>>>>>>> memory to prepare_vm.  This causes overlapping memory
> regions
> >>>>>>>>> and the test crashes.  For simplicity just move MEM_GPA higher.
> >>>>>>>>>
> >>>>>>>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> >>>>>>>>
> >>>>>>>> I am not sure that I understand the issue correctly, is
> >>>>>>>> vm_create_default() already reserving low GPAs (around
> >>>>>>>> 0x10000000) on some arches or run environments?
> >>>>>>>
> >>>>>>> It maps the number of pages you pass in the second argument, see
> >>>>>>> vm_create.
> >>>>>>>
> >>>>>>>      if (phy_pages != 0)
> >>>>>>>        vm_userspace_mem_region_add(vm,
> >> VM_MEM_SRC_ANONYMOUS,
> >>>>>>>                                    0, 0, phy_pages, 0);
> >>>>>>>
> >>>>>>> In this case:
> >>>>>>>
> >>>>>>>      data->vm = vm_create_default(VCPU_ID, mempages,
> >>>>>>> guest_code);
> >>>>>>>
> >>>>>>> called here:
> >>>>>>>
> >>>>>>>      if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
> >>>>>>>                      mem_size, slot_runtime)) {
> >>>>>>>
> >>>>>>> where mempages is mem_size, which is declared as:
> >>>>>>>
> >>>>>>>            uint64_t mem_size = tdata->mem_size ? :
> >>>>>>> MEM_SIZE_PAGES;
> >>>>>>>
> >>>>>>> but actually a better fix is just to pass a small fixed value (e.g.
> >>>>>>> 1024) to vm_create_default, since all other regions are added by
> >>>>>>> hand
> >>>>>>
> >>>>>> Yes, but the argument that is passed to vm_create_default()
> >>>>>> (mem_size in the case of the test) is not passed as phy_pages to
> >> vm_create().
> >>>>>> Rather, vm_create_with_vcpus() calculates some upper bound of
> >>>>>> extra memory that is needed to cover that much guest memory
> >>>>>> (including for its page tables).
> >>>>>>
> >>>>>> The biggest possible mem_size from memslot_perf_test is 512 MiB +
> >>>>>> 1 page, according to my calculations this results in phy_pages of
> >>>>>> 1029
> >>>>>> (~4 MiB) in the x86-64 case and around 1540 (~6 MiB) in the s390x
> >>>>>> case (here I am not sure about the exact number, since s390x has
> >>>>>> some additional alignment requirements).
> >>>>>>
> >>>>>> Both values are well below 256 MiB (0x10000000UL), so I was
> >>>>>> wondering what kind of circumstances can make these allocations
> >>>>>> collide (maybe I am missing something in my analysis).
> >>>>>
> >>>>> I see now that there has been a patch merged last week called
> >>>>> "selftests: kvm: make allocation of extra memory take effect" by
> >>>>> Zhenzhong that now allocates also the whole memory size passed to
> >>>>> vm_create_default() (instead of just page tables for that much
> memory).
> >>>>>
> >>>>> The commit message of this patch says that "perf_test_util and
> >>>>> kvm_page_table_test use it to alloc extra memory currently",
> >>>>> however both kvm_page_table_test and lib/perf_test_util framework
> >>>>> explicitly add the required memory allocation by doing a
> >>>>> vm_userspace_mem_region_add() call for the same memory size that
> >> they pass to vm_create_default().
> >>>>>
> >>>>> So now they allocate this memory twice.
> >>>>>
> >>>>> @Zhenzhong: did you notice improper operation of either
> >>>>> kvm_page_table_test or perf_test_util-based tests
> >>>>> (demand_paging_test, dirty_log_perf_test,
> >>>>> memslot_modification_stress_test) before your patch?
> >>>> No
> >>>>
> >>>>>
> >>>>> They seem to work fine for me without the patch (and I guess other
> >>>>> people would have noticed earlier, too, if they were broken).
> >>>>>
> >>>>> After this patch not only these tests allocate their memory twice
> >>>>> but it is harder to make vm_create_default() allocate the right
> >>>>> amount of memory for the page tables in cases where the test needs
> >>>>> to explicitly use
> >>>>> vm_userspace_mem_region_add() for its allocations (because it
> >>>>> wants the allocation placed at a specific GPA or in a specific memslot).
> >>>>>
> >>>>> One has to basically open-code the page table size calculations
> >>>>> from
> >>>>> vm_create_with_vcpus() in the particular test then, taking also
> >>>>> into account that vm_create_with_vcpus() will not only allocate
> >>>>> the passed memory size (calculated page tables size) but also
> >>>>> behave like it was allocating space for page tables for these page
> >>>>> tables (even though the passed memory size itself is supposed to
> cover them).
> >>>> Looks like we have a different understanding of the parameter
> >> extra_mem_pages of vm_create_default().
> >>>>
> >>>> In your usage, extra_mem_pages is only used for page table
> >>>> calculations, real extra memory allocation happens in the extra
> >>>> call of
> >> vm_userspace_mem_region_add().
> >>>
> >>> Yes, this is the meaning that kvm selftests has always had for
> >>> extra_mem_pages of vm_create_default(). If we'd rather have a
> >>> different meaning, that's fine, but we need to change all the
> >>> callers of the function as well.
> >>
> >> If we change the meaning of extra_mem_pages (keep the patch) it would
> >> be good to still have an additional parameter to
> >> vm_create_with_vcpus() for tests that have to allocate their memory
> >> on their own via
> >> vm_userspace_mem_region_add(), so that vm_create_with_vcpus() just
> >> allocates the page tables for these manual allocations.
> >> Or a helper to calculate the required extra_mem_pages for them.
> >>
> >>> If we decide to leave vm_create_default() the way it was by
> >>> reverting this patch, then maybe we should consider renaming the
> >>> parameter and/or documenting the function.
> >>
> >> Adding a descriptive comment (and possibly renaming the parameter)
> >> seems like a much simpler solution to me than adapting these tests
> >> (and possibly adding the parameter or helper described above for them).
> >
> > Agree, I prefer the simpler way.
> >
> > I also think of an idea for custom slot0 memory, keep extra_mem_pages
> the original way, adding a global slot0_pages for custom slot0 memory.
> Maybe not a good choice as it's not thread safe, just for discussion. That is:
> > 1. revert "selftests: kvm: make allocation of extra memory take effect"
> > 2. add below patch
> > --- a/tools/testing/selftests/kvm/include/kvm_util.h
> > +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> > @@ -280,6 +280,9 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm
> *vm, size_t num,
> >   struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t
> extra_mem_pages,
> >                                   void *guest_code);
> >
> > +struct kvm_vm *vm_create_slot0(uint32_t vcpuid, uint64_t
> slot0_mem_pages,
> > +                              uint64_t extra_mem_pages, void
> > +*guest_code);
> > +
> >   /* Same as vm_create_default, but can be used for more than one vcpu */
> >   struct kvm_vm *vm_create_default_with_vcpus(uint32_t nr_vcpus,
> uint64_t extra_mem_pages,
> >                                              uint32_t
> > num_percpu_pages, void *guest_code, diff --git
> > a/tools/testing/selftests/kvm/lib/kvm_util.c
> > b/tools/testing/selftests/kvm/lib/kvm_util.c
> > index 63418df921f0..56b1225865d5 100644
> > --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > @@ -196,6 +196,7 @@ const struct vm_guest_mode_params
> vm_guest_mode_params[] = {
> >   _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct
> vm_guest_mode_params) == NUM_VM_MODES,
> >                 "Missing new mode params?");
> >
> > +uint64_t slot0_pages = DEFAULT_GUEST_PHY_PAGES;
> >   /*
> >    * VM Create
> >    *
> > @@ -319,8 +320,8 @@ struct kvm_vm *vm_create_with_vcpus(enum
> vm_guest_mode mode, uint32_t nr_vcpus,
> >           * than N/x*2.
> >           */
> >          uint64_t vcpu_pages = (DEFAULT_STACK_PGS + num_percpu_pages) *
> nr_vcpus;
> > -       uint64_t extra_pg_pages = (extra_mem_pages + vcpu_pages) /
> PTES_PER_MIN_PAGE * 2;
> > -       uint64_t pages = DEFAULT_GUEST_PHY_PAGES + vcpu_pages +
> extra_pg_pages;
> > +       uint64_t extra_pg_pages = (slot0_pages + extra_mem_pages +
> vcpu_pages) / PTES_PER_MIN_PAGE * 2;
> > +       uint64_t pages = slot0_pages + vcpu_pages + extra_pg_pages;
> >          struct kvm_vm *vm;
> >          int i;
> >
> > @@ -358,9 +359,18 @@ struct kvm_vm
> *vm_create_default_with_vcpus(uint32_t nr_vcpus, uint64_t extra_me
> >                                      num_percpu_pages, guest_code, vcpuids);
> >   }
> >
> > +struct kvm_vm *vm_create_slot0(uint32_t vcpuid, uint64_t
> slot0_mem_pages,
> > +                                      uint64_t extra_mem_pages, void
> > +*guest_code) {
> > +       slot0_pages = slot0_mem_pages;
> > +       return vm_create_default_with_vcpus(1, extra_mem_pages, 0,
> guest_code,
> > +                                           (uint32_t []){ vcpuid });
> > +}
> > +
> >   struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t
> extra_mem_pages,
> >                                   void *guest_code)
> >   {
> > +       slot0_pages = DEFAULT_GUEST_PHY_PAGES;
> >          return vm_create_default_with_vcpus(1, extra_mem_pages, 0,
> guest_code,
> >                                              (uint32_t []){ vcpuid });
> >   }
> > @@ -626,6 +636,9 @@ void kvm_vm_free(struct kvm_vm *vmp)
> >
> >          /* Free the structure describing the VM. */
> >          free(vmp);
> > +
> > +       /* Restore slot0 memory to default size for next VM creation */
> > +       slot0_pages = DEFAULT_GUEST_PHY_PAGES;
> >   }
> >
> >   /*
> 
> In terms of thread safety a quick glance at current tests seems to suggest that
> none of them create VMs from anything but their main threads (although
> the s390x diag318 handler for sync_regs_test does some suspicious stuff).
> 
> But I think a better solution than adding a global variable as an implicit
> parameter to vm_create_with_vcpus() is to simply add an extra explicit
> parameter to this function - it has just 3 callers that need to be
> (trivially) adapted then.

So we provide custom slot0 memory size support in vm_create_with_vcpus() rather than in vm_create_default(), as it has only 3 callers. That's a good idea, and I see
no test requires custom slot0 support so far. Let me work out a small patchset to do all the suggested changes in this mail thread.
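
For a test that adds all of its data memory by itself the result could look
roughly like this (only a sketch; the final argument order and the
test_mem_pages/TEST_MEM_GPA names are placeholders):

	/* slot0 keeps its default size, test_mem_pages only sizes the page
	 * tables, and the test places its data memory explicitly */
	vm = vm_create_with_vcpus(VM_MODE_DEFAULT, 1, DEFAULT_GUEST_PHY_PAGES,
				  test_mem_pages, 0, guest_code,
				  (uint32_t []){ VCPU_ID });
	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
				    TEST_MEM_GPA, 1, test_mem_pages, 0);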

Regards
Zhenzhong


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2021-06-08  3:20 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-28 19:11 [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test Paolo Bonzini
2021-05-28 19:51 ` Maciej S. Szmigiero
2021-05-29 10:20   ` Paolo Bonzini
2021-05-29 23:13     ` Maciej S. Szmigiero
2021-06-02 23:07       ` Maciej S. Szmigiero
2021-06-03  5:26         ` Duan, Zhenzhong
2021-06-03 12:37           ` Andrew Jones
2021-06-03 13:05             ` Maciej S. Szmigiero
2021-06-04  3:35               ` Duan, Zhenzhong
2021-06-04 16:49                 ` selftests: kvm: allocating extra mem in slot 0 (Was: Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test) Maciej S. Szmigiero
2021-06-08  3:20                   ` Duan, Zhenzhong
2021-06-03 13:05           ` [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test Maciej S. Szmigiero
