From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 31 Oct 2019 09:45:55 +0100
From: Olivier Matz <olivier.matz@6wind.com>
To: Andrew Rybchenko
Cc: Vamsi Krishna Attunuru, dev@dpdk.org, Anatoly Burakov, Ferruh Yigit,
 "Giridharan, Ganesan", Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda,
 Stephen Hemminger, Thomas Monjalon
Message-ID: <20191031084555.lii7uksuhlyyha6s@platinum>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191030143619.4007-1-olivier.matz@6wind.com>
 <20191030143619.4007-6-olivier.matz@6wind.com>
 <20191031082436.p564u2aznvbl2l3m@platinum>
Subject: Re: [dpdk-dev] [EXT] [PATCH v2 5/6] mempool: prevent objects from
 being across pages

On Thu, Oct 31, 2019 at 11:33:30AM +0300, Andrew Rybchenko wrote:
> On 10/31/19 11:24 AM, Olivier Matz wrote:
> > Hi,
> >
> > On Thu, Oct 31, 2019 at 06:54:50AM +0000, Vamsi Krishna Attunuru wrote:
> > > Hi Olivier,
> > >
> > > Thanks for the reworked patches.
> > > With v2, tests with 512MB & 2M page sizes work fine with the octeontx2
> > > mempool PMD.
> >
> > Good to hear.
> >
> > > One more concern: the octeontx fpa mempool driver also has similar
> > > requirements. How do we address that? Can you suggest the best way to
> > > avoid code duplication in PMDs?
> >
> > Well, we could provide additional helpers in librte_mempool:
> > rte_mempool_calc_mem_size_helper() and rte_mempool_populate_helper(),
> > which would be internal to the mempool lib + drivers. These helpers
> > could take an additional parameter to enable the alignment of objects.
> >
> > But: with this approach, we are moving driver specificities back inside
> > the common code, and if tomorrow more drivers with different
> > requirements also ask to factorize them in common code, we may end up
> > with common functions that must support hardware specificities, making
> > them harder to maintain.
> >
> > I agree that duplicating code is not satisfying either, but I think it
> > is still better than starting to move hw requirements into common code,
> > given that mempool drivers were introduced for that reason.
> >
> > @Andrew, any opinion?
>
> I think I'd prefer to have helper functions. It avoids HW-specific
> flags in the external interface (which was discussed initially), but also
> avoids code duplication, which makes it easier to maintain.
> Maybe we will reconsider it in the future, but right now I think
> it is the best option.

OK, let me try to propose a v3 with helpers like this, and we can discuss
on this basis.
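For the sake of discussion, a rough sketch of what such helpers could look
like; the names come from the paragraph above, but the signatures (and the
extra "align_objs" parameter) are assumptions that the actual v3 may well
change:

/* Sketch only; assumes <rte_mempool.h>. Internal helpers shared by the
 * mempool lib and drivers. The hypothetical "align_objs" flag would let
 * a driver (e.g. octeontx2, octeontx) additionally request that objects
 * be aligned to a multiple of total_elt_sz, on top of the default
 * no-page-crossing guarantee.
 */
ssize_t rte_mempool_calc_mem_size_helper(const struct rte_mempool *mp,
                                         uint32_t obj_num, uint32_t pg_shift,
                                         size_t *min_chunk_size,
                                         size_t *align,
                                         unsigned int align_objs);

int rte_mempool_populate_helper(struct rte_mempool *mp,
                                unsigned int max_objs,
                                void *vaddr, rte_iova_t iova, size_t len,
                                rte_mempool_populate_obj_cb_t *obj_cb,
                                void *obj_cb_arg, unsigned int align_objs);

Drivers would then implement their calc_mem_size()/populate() callbacks as
thin wrappers around these, passing align_objs = 1.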
> > > Regards,
> > > Vamsi
> > >
> > > > -----Original Message-----
> > > > From: Olivier Matz
> > > > Sent: Wednesday, October 30, 2019 8:06 PM
> > > > To: dev@dpdk.org
> > > > Cc: Anatoly Burakov; Andrew Rybchenko; Ferruh Yigit; Giridharan,
> > > > Ganesan; Jerin Jacob Kollanukkaran; Kiran Kumar Kokkilagadda;
> > > > Stephen Hemminger; Thomas Monjalon; Vamsi Krishna Attunuru
> > > > Subject: [EXT] [PATCH v2 5/6] mempool: prevent objects from being
> > > > across pages
> > > >
> > > > External Email
> > > >
> > > > ----------------------------------------------------------------------
> > > > When populating a mempool, ensure that objects are not located across
> > > > several pages, except if the user did not request IOVA-contiguous
> > > > objects.
> > > >
> > > > Signed-off-by: Vamsi Krishna Attunuru
> > > > Signed-off-by: Olivier Matz
> > > > ---
> > > >  drivers/mempool/octeontx2/Makefile           |   3 +
> > > >  drivers/mempool/octeontx2/meson.build        |   3 +
> > > >  drivers/mempool/octeontx2/otx2_mempool_ops.c | 119 ++++++++++++++++---
> > > >  lib/librte_mempool/rte_mempool.c             |  23 ++--
> > > >  lib/librte_mempool/rte_mempool_ops_default.c |  32 ++++-
> > > >  5 files changed, 147 insertions(+), 33 deletions(-)
> > > >
> > > > diff --git a/drivers/mempool/octeontx2/Makefile
> > > > b/drivers/mempool/octeontx2/Makefile
> > > > index 87cce22c6..d781cbfc6 100644
> > > > --- a/drivers/mempool/octeontx2/Makefile
> > > > +++ b/drivers/mempool/octeontx2/Makefile
> > > > @@ -27,6 +27,9 @@ EXPORT_MAP := rte_mempool_octeontx2_version.map
> > > >
> > > >  LIBABIVER := 1
> > > >
> > > > +# for rte_mempool_get_page_size
> > > > +CFLAGS += -DALLOW_EXPERIMENTAL_API
> > > > +
> > > >  #
> > > >  # all source are stored in SRCS-y
> > > >  #
> > > > diff --git a/drivers/mempool/octeontx2/meson.build
> > > > b/drivers/mempool/octeontx2/meson.build
> > > > index 9fde40f0e..28f9634da 100644
> > > > --- a/drivers/mempool/octeontx2/meson.build
> > > > +++ b/drivers/mempool/octeontx2/meson.build
> > > > @@ -21,3 +21,6 @@ foreach flag: extra_flags
> > > >  endforeach
> > > >
> > > >  deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_octeontx2', 'mempool']
> > > > +
> > > > +# for rte_mempool_get_page_size
> > > > +allow_experimental_apis = true
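As background for why page crossing matters: in IOVA-as-PA mode, two
virtually adjacent pages can map to non-adjacent physical addresses, so an
object straddling a page boundary is not IOVA-contiguous. A standalone
illustration of the boundary test (hypothetical sizes, not part of the
patch):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t pg_sz = 4096;                 /* hypothetical page size */
        uint64_t obj_off = 4000, elt_sz = 200; /* object offset and size */

        /* same test as check_obj_bounds() below: does the first byte of
         * the object live on the same page as its last byte? */
        if ((obj_off & ~(pg_sz - 1)) != ((obj_off + elt_sz - 1) & ~(pg_sz - 1)))
                printf("object at %" PRIu64 " crosses a page boundary\n",
                       obj_off);
        return 0;
}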
> > > > diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c
> > > > b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> > > > index d769575f4..47117aec6 100644
> > > > --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
> > > > +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> > > > @@ -713,12 +713,76 @@ static ssize_t
> > > >  otx2_npa_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
> > > >                         uint32_t pg_shift, size_t *min_chunk_size,
> > > >                         size_t *align)
> > > >  {
> > > > -       /*
> > > > -        * Simply need space for one more object to be able to
> > > > -        * fulfill alignment requirements.
> > > > -        */
> > > > -       return rte_mempool_op_calc_mem_size_default(mp, obj_num + 1,
> > > > -                                                   pg_shift,
> > > > -                                                   min_chunk_size, align);
> > > > +       size_t total_elt_sz;
> > > > +       size_t obj_per_page, pg_sz, objs_in_last_page;
> > > > +       size_t mem_size;
> > > > +
> > > > +       /* derived from rte_mempool_op_calc_mem_size_default() */
> > > > +
> > > > +       total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> > > > +
> > > > +       if (total_elt_sz == 0) {
> > > > +               mem_size = 0;
> > > > +       } else if (pg_shift == 0) {
> > > > +               /* one object margin to fix alignment */
> > > > +               mem_size = total_elt_sz * (obj_num + 1);
> > > > +       } else {
> > > > +               pg_sz = (size_t)1 << pg_shift;
> > > > +               obj_per_page = pg_sz / total_elt_sz;
> > > > +
> > > > +               /* we need to keep one object to fix alignment */
> > > > +               if (obj_per_page > 0)
> > > > +                       obj_per_page--;
> > > > +
> > > > +               if (obj_per_page == 0) {
> > > > +                       /*
> > > > +                        * Note that if object size is bigger than page
> > > > +                        * size, then it is assumed that pages are grouped
> > > > +                        * in subsets of physically contiguous pages big
> > > > +                        * enough to store at least one object.
> > > > +                        */
> > > > +                       mem_size = RTE_ALIGN_CEIL(2 * total_elt_sz,
> > > > +                                                 pg_sz) * obj_num;
> > > > +               } else {
> > > > +                       /* In the best case, the allocator will return a
> > > > +                        * page-aligned address. For example, with 5 objs,
> > > > +                        * the required space is as below:
> > > > +                        * |     page0     |     page1     | page2 (last) |
> > > > +                        * |obj0 |obj1 |xxx|obj2 |obj3 |xxx|obj4|
> > > > +                        * <------------- mem_size ------------->
> > > > +                        */
> > > > +                       objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1;
> > > > +                       /* room required for the last page */
> > > > +                       mem_size = objs_in_last_page * total_elt_sz;
> > > > +                       /* room required for other pages */
> > > > +                       mem_size += ((obj_num - objs_in_last_page) /
> > > > +                                    obj_per_page) << pg_shift;
> > > > +
> > > > +                       /* In the worst case, the allocator returns a
> > > > +                        * non-aligned pointer, wasting up to
> > > > +                        * total_elt_sz. Add a margin for that.
> > > > +                        */
> > > > +                       mem_size += total_elt_sz - 1;
> > > > +               }
> > > > +       }
> > > > +
> > > > +       *min_chunk_size = total_elt_sz * 2;
> > > > +       *align = RTE_CACHE_LINE_SIZE;
> > > > +
> > > > +       return mem_size;
> > > > +}
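To rehearse the arithmetic above with hypothetical numbers (4 KB pages,
1000-byte elements, 5 objects): obj_per_page is 4096/1000 = 4, minus the
one slot reserved for alignment, so 3; that leaves 2 objects in the last
page, one full page before it, plus the misalignment margin:

#include <stddef.h>
#include <stdio.h>

int main(void)
{
        /* hypothetical numbers, just to exercise the formula above */
        size_t pg_shift = 12, pg_sz = (size_t)1 << pg_shift; /* 4 KB */
        size_t total_elt_sz = 1000, obj_num = 5;

        size_t obj_per_page = pg_sz / total_elt_sz - 1;       /* 4 - 1 = 3 */
        size_t objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1; /* 2 */
        size_t mem_size = objs_in_last_page * total_elt_sz;   /* last page */
        mem_size += ((obj_num - objs_in_last_page) /
                     obj_per_page) << pg_shift;               /* other pages */
        mem_size += total_elt_sz - 1;  /* worst-case misalignment margin */

        printf("mem_size = %zu\n", mem_size); /* 2000 + 4096 + 999 = 7095 */
        return 0;
}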
> > > > +
> > > > +/* Returns -1 if object crosses a page boundary, else returns 0 */
> > > > +static int check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
> > > > +{
> > > > +       if (pg_sz == 0)
> > > > +               return 0;
> > > > +       if (elt_sz > pg_sz)
> > > > +               return 0;
> > > > +       if (RTE_PTR_ALIGN(obj, pg_sz) !=
> > > > +                       RTE_PTR_ALIGN(obj + elt_sz - 1, pg_sz))
> > > > +               return -1;
> > > > +       return 0;
> > > >  }
> > > >
> > > >  static int
> > > > @@ -726,8 +790,12 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
> > > >                    rte_iova_t iova, size_t len,
> > > >                    rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
> > > >  {
> > > > -       size_t total_elt_sz;
> > > > +       char *va = vaddr;
> > > > +       size_t total_elt_sz, pg_sz;
> > > >         size_t off;
> > > > +       unsigned int i;
> > > > +       void *obj;
> > > > +       int ret;
> > > >
> > > >         if (iova == RTE_BAD_IOVA)
> > > >                 return -EINVAL;
> > > > @@ -735,22 +803,45 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
> > > >         total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> > > >
> > > >         /* Align object start address to a multiple of total_elt_sz */
> > > > -       off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
> > > > +       off = total_elt_sz - (((uintptr_t)(va - 1) % total_elt_sz) + 1);
> > > >
> > > >         if (len < off)
> > > >                 return -EINVAL;
> > > >
> > > > -       vaddr = (char *)vaddr + off;
> > > > -       iova += off;
> > > > -       len -= off;
> > > >
> > > > -       npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
> > > > +       npa_lf_aura_op_range_set(mp->pool_id, iova + off, iova + len - off);
> > > >
> > > >         if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
> > > >                 return -EBUSY;
> > > >
> > > > -       return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova,
> > > > -                                              len, obj_cb, obj_cb_arg);
> > > > +       /* the following is derived from rte_mempool_op_populate_default() */
> > > > +
> > > > +       ret = rte_mempool_get_page_size(mp, &pg_sz);
> > > > +       if (ret < 0)
> > > > +               return ret;
> > > > +
> > > > +       for (i = 0; i < max_objs; i++) {
> > > > +               /* avoid objects crossing page boundaries, and align
> > > > +                * offset to a multiple of total_elt_sz.
> > > > +                */
> > > > +               if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0) {
> > > > +                       off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
> > > > +                       off += total_elt_sz - (((uintptr_t)(va + off - 1) %
> > > > +                                               total_elt_sz) + 1);
> > > > +               }
> > > > +
> > > > +               if (off + total_elt_sz > len)
> > > > +                       break;
> > > > +
> > > > +               off += mp->header_size;
> > > > +               obj = va + off;
> > > > +               obj_cb(mp, obj_cb_arg, obj,
> > > > +                      (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
> > > > +               rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
> > > > +               off += mp->elt_size + mp->trailer_size;
> > > > +       }
> > > > +
> > > > +       return i;
> > > >  }
> > > >
> > > >  static struct rte_mempool_ops otx2_npa_ops = {
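A quick standalone check of the new offset expression above: it rounds va
up to the next multiple of total_elt_sz, like the old expression, but
yields 0 instead of total_elt_sz when va is already aligned (so no object
slot is wasted in that case):

#include <assert.h>
#include <stdint.h>

/* same expression as in the patch, isolated for testing */
static uintptr_t align_off(uintptr_t va, uintptr_t m)
{
        return m - (((va - 1) % m) + 1);
}

int main(void)
{
        assert(align_off(3000, 1000) == 0);   /* already aligned: no waste */
        assert(align_off(3001, 1000) == 999); /* 1 byte past: almost full skip */
        assert(align_off(3999, 1000) == 1);   /* 1 byte short: minimal skip */
        return 0;
}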
> > > > diff --git a/lib/librte_mempool/rte_mempool.c
> > > > b/lib/librte_mempool/rte_mempool.c
> > > > index 758c5410b..d3db9273d 100644
> > > > --- a/lib/librte_mempool/rte_mempool.c
> > > > +++ b/lib/librte_mempool/rte_mempool.c
> > > > @@ -431,8 +431,6 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
> > > >
> > > >         if (!need_iova_contig_obj)
> > > >                 *pg_sz = 0;
> > > > -       else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA)
> > > > -               *pg_sz = 0;
> > > >         else if (rte_eal_has_hugepages() || alloc_in_ext_mem)
> > > >                 *pg_sz = get_min_page_size(mp->socket_id);
> > > >         else
> > > > @@ -481,17 +479,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> > > >          * then just set page shift and page size to 0, because the user has
> > > >          * indicated that there's no need to care about anything.
> > > >          *
> > > > -        * if we do need contiguous objects, there is also an option to reserve
> > > > -        * the entire mempool memory as one contiguous block of memory, in
> > > > -        * which case the page shift and alignment wouldn't matter as well.
> > > > +        * if we do need contiguous objects (if a mempool driver has its
> > > > +        * own calc_size() method returning min_chunk_size = mem_size),
> > > > +        * there is also an option to reserve the entire mempool memory
> > > > +        * as one contiguous block of memory.
> > > >          *
> > > >          * if we require contiguous objects, but not necessarily the entire
> > > > -        * mempool reserved space to be contiguous, then there are two options.
> > > > -        *
> > > > -        * if our IO addresses are virtual, not actual physical (IOVA as VA
> > > > -        * case), then no page shift needed - our memory allocation will give us
> > > > -        * contiguous IO memory as far as the hardware is concerned, so
> > > > -        * act as if we're getting contiguous memory.
> > > > +        * mempool reserved space to be contiguous, pg_sz will be != 0,
> > > > +        * and the default ops->populate() will take care of not placing
> > > > +        * objects across pages.
> > > >          *
> > > >          * if our IO addresses are physical, we may get memory from bigger
> > > >          * pages, or we might get memory from smaller pages, and how much of it
> > > > @@ -504,11 +500,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> > > >          *
> > > >          * If we fail to get enough contiguous memory, then we'll go and
> > > >          * reserve space in smaller chunks.
> > > > -        *
> > > > -        * We also have to take into account the fact that memory that we're
> > > > -        * going to allocate from can belong to an externally allocated memory
> > > > -        * area, in which case the assumption of IOVA as VA mode being
> > > > -        * synonymous with IOVA contiguousness will not hold.
> > > >          */
> > > >
> > > >         need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
> > > > diff --git a/lib/librte_mempool/rte_mempool_ops_default.c
> > > > b/lib/librte_mempool/rte_mempool_ops_default.c
> > > > index f6aea7662..e5cd4600f 100644
> > > > --- a/lib/librte_mempool/rte_mempool_ops_default.c
> > > > +++ b/lib/librte_mempool/rte_mempool_ops_default.c
> > > > @@ -61,21 +61,47 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
> > > >         return mem_size;
> > > >  }
> > > >
> > > > +/* Returns -1 if object crosses a page boundary, else returns 0 */
> > > > +static int check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
> > > > +{
> > > > +       if (pg_sz == 0)
> > > > +               return 0;
> > > > +       if (elt_sz > pg_sz)
> > > > +               return 0;
> > > > +       if (RTE_PTR_ALIGN(obj, pg_sz) !=
> > > > +                       RTE_PTR_ALIGN(obj + elt_sz - 1, pg_sz))
> > > > +               return -1;
> > > > +       return 0;
> > > > +}
> > > > +
> > > >  int
> > > >  rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
> > > >                 void *vaddr, rte_iova_t iova, size_t len,
> > > >                 rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
> > > >  {
> > > > -       size_t total_elt_sz;
> > > > +       char *va = vaddr;
> > > > +       size_t total_elt_sz, pg_sz;
> > > >         size_t off;
> > > >         unsigned int i;
> > > >         void *obj;
> > > > +       int ret;
> > > > +
> > > > +       ret = rte_mempool_get_page_size(mp, &pg_sz);
> > > > +       if (ret < 0)
> > > > +               return ret;
> > > >
> > > >         total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> > > >
> > > > -       for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
> > > > +       for (off = 0, i = 0; i < max_objs; i++) {
> > > > +               /* avoid objects crossing page boundaries */
> > > > +               if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0)
> > > > +                       off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
> > > > +
> > > > +               if (off + total_elt_sz > len)
> > > > +                       break;
> > > > +
> > > >                 off += mp->header_size;
> > > > -               obj = (char *)vaddr + off;
> > > > +               obj = va + off;
> > > >                 obj_cb(mp, obj_cb_arg, obj,
> > > >                        (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
> > > >                 rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
> > > > --
> > > > 2.20.1
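Finally, a self-contained simulation (plain C, hypothetical sizes, no DPDK
dependencies; it assumes elt_sz <= pg_sz) of the placement policy the
reworked rte_mempool_op_populate_default() implements: walk the buffer and,
whenever the next object would straddle a page, jump to the start of the
next page:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t pg_sz = 4096, elt_sz = 1000, len = 3 * 4096;
        uint64_t off = 0;
        unsigned int i;

        for (i = 0; i < 16; i++) {
                /* would [off, off + elt_sz) cross a page boundary? */
                if (off / pg_sz != (off + elt_sz - 1) / pg_sz)
                        off = (off / pg_sz + 1) * pg_sz; /* skip to next page */
                if (off + elt_sz > len)
                        break;
                printf("obj %u at offset %" PRIu64 "\n", i, off);
                off += elt_sz;
        }
        printf("placed %u objects\n", i); /* 12: four per 4 KB page */
        return 0;
}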