Linux-kselftest Archive on lore.kernel.org
From: Andrew Jones <drjones@redhat.com>
To: Ben Gardon <bgardon@google.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-kselftest@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Cannon Matthews <cannonmatthews@google.com>,
	Peter Xu <peterx@redhat.com>
Subject: Re: [PATCH v3 8/8] KVM: selftests: Move large memslots above KVM internal memslots in _vm_create
Date: Wed, 8 Jan 2020 15:07:47 +0100
Message-ID: <20200108140747.ajglffa7oz5e3b3e@kamzik.brq.redhat.com> (raw)
In-Reply-To: <CANgfPd9VxvJYAw_cqG9X2GUAkZ9vumF8mZ1+P==mJoZgShR_rg@mail.gmail.com>

On Tue, Jan 07, 2020 at 01:20:53PM -0800, Ben Gardon wrote:
> Would it be viable to allocate at 4G by default and then add another
> interface for allocations at low memory addresses? For most tests, I
> don't think there's any value to having the backing paddrs below 3G.

Please don't top post. Replies should go under the comments and questions
in order to more easily read the thread.

Anyway, this sounds reasonable to me, but we'll need to test all tests
on all architectures, as there could be some assumptions broken with
a change like that.

Thanks,
drew

> 
> On Tue, Jan 7, 2020 at 7:42 AM Andrew Jones <drjones@redhat.com> wrote:
> >
> > On Mon, Dec 16, 2019 at 01:39:01PM -0800, Ben Gardon wrote:
> > > KVM creates internal memslots between 3 and 4 GiB paddrs on the first
> > > vCPU creation. If memslot 0 is large enough it collides with these
> > > memslots and causes vCPU creation to fail. When requesting more than 3G,
> > > start memslot 0 at 4G in _vm_create.
> > >
> > > Signed-off-by: Ben Gardon <bgardon@google.com>
> > > ---
> > >  tools/testing/selftests/kvm/lib/kvm_util.c | 15 +++++++++++----
> > >  1 file changed, 11 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> > > index 41cf45416060f..886d58e6cac39 100644
> > > --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > > +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > > @@ -113,6 +113,8 @@ const char * const vm_guest_mode_string[] = {
> > >  _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
> > >              "Missing new mode strings?");
> > >
> > > +#define KVM_INTERNAL_MEMSLOTS_START_PADDR (3UL << 30)
> > > +#define KVM_INTERNAL_MEMSLOTS_END_PADDR (4UL << 30)
> > >  /*
> > >   * VM Create
> > >   *
> > > @@ -128,13 +130,16 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
> > >   *
> > >   * Creates a VM with the mode specified by mode (e.g. VM_MODE_P52V48_4K).
> > >   * When phy_pages is non-zero, a memory region of phy_pages physical pages
> > > - * is created and mapped starting at guest physical address 0.  The file
> > > - * descriptor to control the created VM is created with the permissions
> > > - * given by perm (e.g. O_RDWR).
> > > + * is created. If phy_pages is less than 3G, it is mapped starting at guest
> > > + * physical address 0. If phy_pages is greater than 3G, it is mapped starting
> > > + * 4G into the guest physical address space to avoid KVM internal memslots
> > > + * which map the region between 3G and 4G. The file descriptor to control the
> > > + * created VM is created with the permissions given by perm (e.g. O_RDWR).
> > >   */
> > >  struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
> > >  {
> > >       struct kvm_vm *vm;
> > > +     uint64_t guest_paddr = 0;
> > >
> > >       DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
> > >
> > > @@ -227,9 +232,11 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
> > >
> > >       /* Allocate and setup memory for guest. */
> > >       vm->vpages_mapped = sparsebit_alloc();
> > > +     if (guest_paddr + phy_pages > KVM_INTERNAL_MEMSLOTS_START_PADDR)
> > > +             guest_paddr = KVM_INTERNAL_MEMSLOTS_END_PADDR;
> > >       if (phy_pages != 0)
> > >               vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
> > > -                                         0, 0, phy_pages, 0);
> > > +                                         guest_paddr, 0, phy_pages, 0);
> > >
> > >       return vm;
> > >  }
> > > --
> > > 2.24.1.735.g03f4e72817-goog
> > >
> >
> > I feel like this function is becoming too magic, and it'll be more
> > complicated for tests that add additional memory regions to know
> > which physical addresses are available. Maybe we should assert
> > if we can't allocate more than 3G at offset zero and also provide
> > another interface for allocating at an offset input by the user,
> > as long as the offset is 4G or above (asserting when it isn't)?
> >
> > Thanks,
> > drew
> >
> 


Thread overview: 26+ messages
2019-12-16 21:38 [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
2019-12-16 21:38 ` [PATCH v3 1/8] KVM: selftests: Create a " Ben Gardon
2020-01-07 14:33   ` Peter Xu
2020-01-07 14:56     ` Andrew Jones
2020-01-07 18:41       ` Ben Gardon
2020-01-08 13:45         ` Andrew Jones
2019-12-16 21:38 ` [PATCH v3 2/8] KVM: selftests: Add demand paging content to the " Ben Gardon
2020-01-07 16:00   ` Peter Xu
2020-01-07 21:13     ` Ben Gardon
2019-12-16 21:38 ` [PATCH v3 3/8] KVM: selftests: Add configurable demand paging delay Ben Gardon
2020-01-07 16:04   ` Peter Xu
2019-12-16 21:38 ` [PATCH v3 4/8] KVM: selftests: Add memory size parameter to the demand paging test Ben Gardon
2020-01-07 15:02   ` Andrew Jones
2020-01-07 21:18     ` Ben Gardon
2019-12-16 21:38 ` [PATCH v3 5/8] KVM: selftests: Pass args to vCPU instead of using globals Ben Gardon
2020-01-07 15:23   ` Andrew Jones
2020-01-07 18:26     ` Ben Gardon
2020-01-08 13:59       ` Andrew Jones
2019-12-16 21:38 ` [PATCH v3 6/8] KVM: selftests: Support multiple vCPUs in demand paging test Ben Gardon
2020-01-07 15:27   ` Andrew Jones
2019-12-16 21:39 ` [PATCH v3 7/8] KVM: selftests: Time guest demand paging Ben Gardon
2019-12-16 21:39 ` [PATCH v3 8/8] KVM: selftests: Move large memslots above KVM internal memslots in _vm_create Ben Gardon
2020-01-07 15:42   ` Andrew Jones
2020-01-07 21:20     ` Ben Gardon
2020-01-08 14:07       ` Andrew Jones [this message]
2020-01-06 22:46 ` [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon

