From: Andrew Jones
Subject: Re: [kvm-unit-tests PATCH 1/4] lib/alloc: improve robustness
Date: Wed, 2 Nov 2016 11:24:40 +0100
Message-ID: <20161102102440.wyurwg2nep4zsjqe@hawk.localdomain>
References: <1477939906-28802-1-git-send-email-drjones@redhat.com>
 <1477939906-28802-2-git-send-email-drjones@redhat.com>
 <986a24ad-af96-5940-9268-4ce0560d8981@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm@vger.kernel.org, pbonzini@redhat.com, lvivier@redhat.com
To: Thomas Huth
Content-Disposition: inline
In-Reply-To: <986a24ad-af96-5940-9268-4ce0560d8981@redhat.com>

On Wed, Nov 02, 2016 at 07:08:51AM +0100, Thomas Huth wrote:
> On 31.10.2016 19:51, Andrew Jones wrote:
> > Add some asserts to make sure phys_alloc_init is called only once
> > and before any other alloc calls.
> >
> > Signed-off-by: Andrew Jones
> > ---
> >  lib/alloc.c | 17 +++++++++--------
> >  1 file changed, 9 insertions(+), 8 deletions(-)
> >
> > diff --git a/lib/alloc.c b/lib/alloc.c
> > index e1d7b8a380ff..a83c8c5db850 100644
> > --- a/lib/alloc.c
> > +++ b/lib/alloc.c
> > @@ -42,12 +42,13 @@ void phys_alloc_show(void)
> >
> >  void phys_alloc_init(phys_addr_t base_addr, phys_addr_t size)
> >  {
> > -        spin_lock(&lock);
> > +        /* we should only be called once by one processor */
> > +        assert(!top);
> > +        assert(size);
>
> If you remove the spinlock, isn't there a potential race between the
> assert(!top) and the "top = ..." below? E.g. if two CPUs enter this
> function at the same time, both could check and pass assert(!top)
> independently, and then do the "top = ..." in parallel?

Good point. The point of adding the asserts is to force callers to do
the right thing, so we might as well keep the locking and fully
enforce it, rather than assume callers will mostly behave. For v2
I'll also update the documentation in alloc.h, though, stating that
this should only be called once, from the primary cpu, before the
secondaries are started, so callers know what the right thing is.
(I'll put a quick sketch of what I mean at the end of this mail.)

>
> >          base = base_addr;
> >          top = base + size;
> >          align_min = DEFAULT_MINIMUM_ALIGNMENT;
> > -        nr_regions = 0;
> > -        spin_unlock(&lock);
> >  }
> >
> >  void phys_alloc_set_minimum_alignment(phys_addr_t align)
> > @@ -63,14 +64,14 @@ static phys_addr_t phys_alloc_aligned_safe(phys_addr_t size,
> >  {
> >          static bool warned = false;
> >          phys_addr_t addr, size_orig = size;
> > -        u64 top_safe;
> > +        u64 top_safe = top;
> >
> > -        spin_lock(&lock);
> > +        if (safe && sizeof(long) == 4)
> > +                top_safe = MIN(top, 1ULL << 32);
> >
> > -        top_safe = top;
> > +        assert(base < top_safe);
> >
> > -        if (safe && sizeof(long) == 4)
> > -                top_safe = MIN(top_safe, 1ULL << 32);
> > +        spin_lock(&lock);
> >
> >          align = MAX(align, align_min);
>
> This hunk rather looks like a code re-arrangement to me which is not
> directly related to the patch description - maybe you should put this
> into a separate patch?
Much of it was re-arranging, but as a side effect of getting the
'assert(base < top_safe)' added in the right place. That assert
ensures the subtraction below is safe, and also that base != top_safe,
which would most likely mean base == top_safe == 0, i.e. the allocator
was never initialized. Hmm, it should probably be
'assert(top_safe && base < top_safe)' though, to also account for the
case where there's no free memory left.
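
As promised above, here's a quick sketch of phys_alloc_init for v2:
the asserts from this patch, but with the locking kept so the assert
and the assignments can't race (completely untested, just to
illustrate the idea):

void phys_alloc_init(phys_addr_t base_addr, phys_addr_t size)
{
        /*
         * Must only be called once, by the primary cpu, before the
         * secondaries are started and before any other alloc calls.
         */
        spin_lock(&lock);
        assert(!top);
        assert(size);
        base = base_addr;
        top = base + size;
        align_min = DEFAULT_MINIMUM_ALIGNMENT;
        spin_unlock(&lock);
}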
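
And for phys_alloc_aligned_safe, v2 would then fold in something like
this on top of the hunk above (again untested):

         if (safe && sizeof(long) == 4)
                 top_safe = MIN(top, 1ULL << 32);

-        assert(base < top_safe);
+        /* initialized and still some free memory below top_safe */
+        assert(top_safe && base < top_safe);

         spin_lock(&lock);

Thanks,
drew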