From mboxrd@z Thu Jan  1 00:00:00 1970
From: Thomas Monjalon
Subject: Re: [PATCH] malloc: respect SIZE_HINT_ONLY when looking for the biggest free elem
Date: Sun, 28 Oct 2018 11:50:52 +0100
Message-ID: <2067770.0L7xP8buvR@xps>
References: <20181007193147.123868-1-dariusz.stojaczyk@intel.com> <522e80af-e19e-2e5c-f1f1-6fd34075cf76@intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: dev@dpdk.org, "Burakov, Anatoly", stable@dpdk.org
To: Darek Stojaczyk
In-Reply-To: <522e80af-e19e-2e5c-f1f1-6fd34075cf76@intel.com>
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

08/10/2018 11:02, Burakov, Anatoly:
> On 07-Oct-18 8:31 PM, Darek Stojaczyk wrote:
> > RTE_MEMZONE_SIZE_HINT_ONLY wasn't checked in any way,
> > causing size hints to be parsed as hard requirements.
> > This resulted in some allocations being failed prematurely.
> >
> > Fixes: 68b6092bd3c7 ("malloc: allow reserving biggest element")
> > Cc: anatoly.burakov@intel.com
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Darek Stojaczyk
> > ---
> >  lib/librte_eal/common/malloc_heap.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
> > index ac7bbb3ba..d2a8bd8dc 100644
> > --- a/lib/librte_eal/common/malloc_heap.c
> > +++ b/lib/librte_eal/common/malloc_heap.c
> > @@ -165,7 +165,9 @@ find_biggest_element(struct malloc_heap *heap, size_t *size,
> >  	for (elem = LIST_FIRST(&heap->free_head[idx]);
> >  			!!elem; elem = LIST_NEXT(elem, free_list)) {
> >  		size_t cur_size;
> > -		if (!check_hugepage_sz(flags, elem->msl->page_sz))
> > +		if ((flags & RTE_MEMZONE_SIZE_HINT_ONLY) == 0 &&
> > +				!check_hugepage_sz(flags,
> > +					elem->msl->page_sz))
> >  			continue;
>
> Reviewed-by: Anatoly Burakov
>
> Although to be frank, the whole concept of "reserving biggest available
> memzone" is currently broken because of dynamic memory allocation. There
> is currently no way to allocate "as many hugepages as you can", so we're
> only looking at memory already allocated, which in the general case is
> less than a page size long (unless you use legacy mode or memory
> preallocation switches).

Applied anyway, thanks
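
For context, a minimal caller-side sketch of the case this patch fixes (illustrative only; the
memzone name and the particular flag combination below are made up, not taken from this thread).
Reserving with len == 0 goes through find_biggest_element(), and combining a page-size flag with
RTE_MEMZONE_SIZE_HINT_ONLY is supposed to make that page size a preference rather than a hard
requirement, so free elements backed by other page sizes should still be considered:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_memzone.h>

    int
    main(int argc, char **argv)
    {
    	const struct rte_memzone *mz;

    	if (rte_eal_init(argc, argv) < 0)
    		return -1;

    	/*
    	 * len == 0 asks for the biggest available element on the heap.
    	 * RTE_MEMZONE_2MB is only a hint here because
    	 * RTE_MEMZONE_SIZE_HINT_ONLY is also set; before the fix, the
    	 * hint was treated as a hard requirement and elements backed by
    	 * other hugepage sizes were skipped.
    	 */
    	mz = rte_memzone_reserve("biggest_mz", 0, SOCKET_ID_ANY,
    			RTE_MEMZONE_2MB | RTE_MEMZONE_SIZE_HINT_ONLY);
    	if (mz == NULL) {
    		printf("reservation failed\n");
    		return -1;
    	}
    	printf("reserved %zu bytes\n", mz->len);
    	return 0;
    }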