From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Jones
Subject: [kvm-unit-tests PATCH v2 6/6] lib/x86/vm: enable malloc and friends
Date: Wed, 2 Nov 2016 21:52:46 +0100
Message-ID: <1478119966-13252-7-git-send-email-drjones@redhat.com>
References: <1478119966-13252-1-git-send-email-drjones@redhat.com>
In-Reply-To: <1478119966-13252-1-git-send-email-drjones@redhat.com>
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, lvivier@redhat.com, thuth@redhat.com

We've had malloc, calloc, free, and memalign available for quite a while, and
arm and powerpc make use of them. x86 hasn't yet, which is fine, but could
lead to problems if common lib code wants to make use of them.

While arm and powerpc currently use early_alloc_ops, built on phys_alloc, x86
does not initialize phys_alloc and has its own memory management, vmalloc.
This patch enables vmalloc to be used as the underlying allocator, but there
are two drawbacks:

 1) Every allocation will allocate at least one page.
 2) It can only be used with the MMU enabled.

Drawback (1) is probably fine for kvm-unit-tests. Drawback (2) may not be, as
common lib code may get invoked at any time by any unit test, including ones
that don't have the MMU enabled. However, as we only switch alloc_ops to the
vmalloc-based implementations in setup_vm, where the MMU is enabled, they'll
be safe to use for any unit test that invokes setup_vm first.
If the unit test does not invoke setup_vm first, then alloc_ops will use the
default implementations, the ones based on phys_alloc, which will result in
asserts firing if phys_alloc_init has not been called first.

While this patch may not enable anything right now, I think it makes sense to
enable these alloc ops before they're needed, because then everything will
most likely just work when a future common lib function that uses malloc is
introduced. If that common function results in a unit test hitting an assert,
then the test writer will just need to decide whether they want to use
phys_alloc or vmalloc and call the appropriate init function first.

Signed-off-by: Andrew Jones
---
 lib/x86/vm.c | 49 ++++++++++++++++++++++++++++++++++++++++++-------
 lib/x86/vm.h |  2 +-
 2 files changed, 43 insertions(+), 8 deletions(-)

diff --git a/lib/x86/vm.c b/lib/x86/vm.c
index 8b95104ef80f..955c9b8afea5 100644
--- a/lib/x86/vm.c
+++ b/lib/x86/vm.c
@@ -120,20 +120,52 @@ static void setup_mmu(unsigned long len)
 	printf("cr4 = %lx\n", read_cr4());
 }
 
+static void *vcalloc(size_t nmemb, size_t size)
+{
+	void *addr = vmalloc(nmemb * size);
+	memset(addr, 0, nmemb * size);
+	return addr;
+}
+
+static void *vmemalign(size_t alignment, size_t size)
+{
+	void *base, *addr;
+	size_t offset;
+
+	assert(alignment && !(alignment & (alignment - 1)));
+
+	alignment = ALIGN(alignment, sizeof(size_t));
+	base = vmalloc(size + alignment);
+	offset = ((size_t *)base)[-1];
+
+	addr = (void *)ALIGN((ulong)base, alignment);
+	((size_t *)addr)[-1] = offset + (addr - base);
+	return addr;
+}
+
+static struct alloc_ops vm_alloc_ops = {
+	.malloc = vmalloc,
+	.calloc = vcalloc,
+	.free = vfree,
+	.memalign = vmemalign,
+};
+
 void setup_vm()
 {
 	assert(!end_of_memory);
 	end_of_memory = fwcfg_get_u64(FW_CFG_RAM_SIZE);
 	heap_init(&edata, end_of_memory - (unsigned long)&edata);
 	setup_mmu(end_of_memory);
+	alloc_ops = &vm_alloc_ops;
 }
 
-void *vmalloc(unsigned long size)
+void *vmalloc(size_t size)
 {
 	void *mem, *p;
 	unsigned pages;
+	size_t offset = sizeof(size_t) * 2;
 
-	size = PAGE_ALIGN(size + sizeof(unsigned long));
+	size = PAGE_ALIGN(size + offset);
 
 	spin_lock(&vm_lock);
 	vfree_top -= size;
@@ -145,8 +177,11 @@ void *vmalloc(unsigned long size)
 		install_page(phys_to_virt(read_cr3()), virt_to_phys(alloc_page()), p);
 		p += PAGE_SIZE;
 	}
-	*(unsigned long *)mem = size;
-	mem += sizeof(unsigned long);
+
+	((size_t *)mem)[0] = size;
+	((size_t *)mem)[1] = offset;
+	mem += offset;
+
 	return mem;
 }
 
@@ -157,13 +192,13 @@ uint64_t virt_to_phys_cr3(void *mem)
 
 void vfree(void *mem)
 {
-	unsigned long size;
+	size_t size;
 
 	if (mem == NULL)
 		return;
 
-	mem -= sizeof(unsigned long);
-	size = *(unsigned long *)mem;
+	mem -= ((size_t *)mem)[-1]; /* offset */
+	size = *(size_t *)mem;
 
 	while (size) {
 		free_page(phys_to_virt(*get_pte(phys_to_virt(read_cr3()), mem)
 				       & PT_ADDR_MASK));
diff --git a/lib/x86/vm.h b/lib/x86/vm.h
index 6a4384f5a48d..3c9a71d03cf9 100644
--- a/lib/x86/vm.h
+++ b/lib/x86/vm.h
@@ -7,7 +7,7 @@
 
 void setup_vm();
 
-void *vmalloc(unsigned long size);
+void *vmalloc(size_t size);
 void vfree(void *mem);
 void *vmap(unsigned long long phys, unsigned long size);
 void *alloc_vpage(void);
-- 
2.7.4