Date: Wed, 30 Oct 2019 11:38:27 +0100
From: Olivier Matz
To: Andrew Rybchenko
Cc: dev@dpdk.org, Anatoly Burakov, Ferruh Yigit, "Giridharan, Ganesan",
 Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda, Stephen Hemminger,
 Thomas Monjalon, Vamsi Krishna Attunuru
Message-ID: <20191030103827.xvsywc2ffup4dqym@platinum>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191028140122.9592-1-olivier.matz@6wind.com>
 <20191028140122.9592-4-olivier.matz@6wind.com>
 <24990d86-4ef8-4dce-113c-b824fe55e3f5@solarflare.com>
 <20191029172018.wvz57dbuhyy4bd4k@platinum>
 <79e7adee-32c0-2559-3aec-6d311f3dddfa@solarflare.com>
Subject: Re: [dpdk-dev] [PATCH 3/5] mempool: remove optimistic IOVA-contiguous allocation

On Wed, Oct 30, 2019 at 10:44:04AM +0300, Andrew Rybchenko wrote:
> On 10/30/19 10:36 AM, Andrew Rybchenko wrote:
> > On 10/29/19 8:20 PM, Olivier Matz wrote:
> > > On Tue, Oct 29, 2019 at 01:25:10PM +0300, Andrew Rybchenko wrote:
> > > > On 10/28/19 5:01 PM, Olivier Matz wrote:
> > > > > The previous commit reduced the amount of required memory when
> > > > > populating the mempool with non IOVA-contiguous memory.
> > > > >
> > > > > Since there is no big advantage to having a fully IOVA-contiguous
> > > > > mempool if it is not explicitly requested, remove this code; it
> > > > > simplifies the populate function.
> > > > >
> > > > > Signed-off-by: Olivier Matz
> > > >
> > > > One comment below, other than that
> > > > Reviewed-by: Andrew Rybchenko
> > > >
> > > > > ---
> > > > >  lib/librte_mempool/rte_mempool.c | 47 ++++++--------------------------
> > > > >  1 file changed, 8 insertions(+), 39 deletions(-)
> > > > >
> > > > > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > > > > index 3275e48c9..5e1f202dc 100644
> > > > > --- a/lib/librte_mempool/rte_mempool.c
> > > > > +++ b/lib/librte_mempool/rte_mempool.c
> > > >
> > > > [snip]
> > > >
> > > > > @@ -531,36 +521,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> > > > >  			goto fail;
> > > > >  		}
> > > > >
> > > > > -		flags = mz_flags;
> > > > > -
> > > > >  		/* if we're trying to reserve contiguous memory, add appropriate
> > > > >  		 * memzone flag.
> > > > >  		 */
> > > > > -		if (try_iova_contig_mempool)
> > > > > -			flags |= RTE_MEMZONE_IOVA_CONTIG;
> > > > > +		if (min_chunk_size == (size_t)mem_size)
> > > > > +			mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
> > > > >
> > > > >  		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> > > > > -				mp->socket_id, flags, align);
> > > > > -
> > > > > -		/* if we were trying to allocate contiguous memory, failed and
> > > > > -		 * minimum required contiguous chunk fits minimum page, adjust
> > > > > -		 * memzone size to the page size, and try again.
> > > > > -		 */
> > > > > -		if (mz == NULL && try_iova_contig_mempool &&
> > > > > -				min_chunk_size <= pg_sz) {
> > > > > -			try_iova_contig_mempool = false;
> > > > > -			flags &= ~RTE_MEMZONE_IOVA_CONTIG;
> > > > > -
> > > > > -			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> > > > > -					pg_shift, &min_chunk_size, &align);
> > > > > -			if (mem_size < 0) {
> > > > > -				ret = mem_size;
> > > > > -				goto fail;
> > > > > -			}
> > > > > +				mp->socket_id, mz_flags, align);
> > > > >
> > > > > -			mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> > > > > -				mp->socket_id, flags, align);
> > > > > -		}
> > > > >
> > > > >  		/* don't try reserving with 0 size if we were asked to reserve
> > > > >  		 * IOVA-contiguous memory.
> > > > >  		 */
> > > >
> > > > [snip]
> > > >
> > > > > @@ -587,7 +556,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> > > > >  		else
> > > > >  			iova = RTE_BAD_IOVA;
> > > > >
> > > > > -		if (try_iova_contig_mempool || pg_sz == 0)
> > > > > +		if (pg_sz == 0)
> > > >
> > > > I think (mz_flags & RTE_MEMZONE_IOVA_CONTIG) is lost here.
> > >
> > > Do you mean we should do instead
> > >
> > >     if (pg_sz == 0 || (mz_flags & RTE_MEMZONE_IOVA_CONTIG))
> > >         populate_iova()
> > >     else
> > >         populate_virt()
> > >
> > > ?
> >
> > Yes.
> >
> > > I would say yes, but it should be removed in patch 5/5.
> > > In the end, we don't want objects to span pages, even
> > > with RTE_MEMZONE_IOVA_CONTIG.
>
> Thinking more about it, I don't understand why. If we know that
> the memzone is IOVA-contiguous, why should we not populate
> it this way? It does not prevent patch 5/5 from doing its job.

You are right. The page-crossing check is done in the ops->populate
callback, so it is also done by populate_iova().

> > > > >  			ret = rte_mempool_populate_iova(mp, mz->addr,
> > > > >  				iova, mz->len,
> > > > >  				rte_mempool_memchunk_mz_free,
> >
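To make the agreed fix concrete, the dispatch would become something
like the sketch below. It is reconstructed from the hunks quoted above,
not the final committed code; the free-callback arguments are assumed
to match the existing rte_mempool_populate_iova() call in
rte_mempool_populate_default(), and the trailing cast of mz to an
opaque pointer is illustrative.

	/* Populate by IOVA when the memzone is known to be
	 * IOVA-contiguous (or when there is no page-size constraint);
	 * otherwise populate chunk by chunk along virtual pages. Per
	 * the discussion above, the ops->populate callback performs
	 * the page-crossing check on both paths.
	 */
	if (pg_sz == 0 || (mz_flags & RTE_MEMZONE_IOVA_CONTIG))
		ret = rte_mempool_populate_iova(mp, mz->addr,
				iova, mz->len,
				rte_mempool_memchunk_mz_free,
				(void *)(uintptr_t)mz);
	else
		ret = rte_mempool_populate_virt(mp, mz->addr,
				mz->len, pg_sz,
				rte_mempool_memchunk_mz_free,
				(void *)(uintptr_t)mz);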