From: Jerin Jacob
Date: Wed, 30 Oct 2019 14:08:40 +0530
To: Andrew Rybchenko
Cc: Vamsi Krishna Attunuru, Olivier Matz, dev@dpdk.org, Anatoly Burakov,
    Ferruh Yigit, "Giridharan, Ganesan", Jerin Jacob Kollanukkaran, Kiran Kumar
    Kokkilagadda, Stephen Hemminger, Thomas Monjalon
Subject: Re: [dpdk-dev] [EXT] [PATCH 5/5] mempool: prevent objects from being across pages

On Wed, Oct 30, 2019 at 1:16 PM Andrew Rybchenko wrote:
>
> >> int
> >> rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
> >>                 void *vaddr, rte_iova_t iova, size_t len,
> >>                 rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
> >> {
> >> -       size_t total_elt_sz;
> >> +       char *va = vaddr;
> >> +       size_t total_elt_sz, pg_sz;
> >>         size_t off;
> >>         unsigned int i;
> >>         void *obj;
> >>
> >> +       rte_mempool_get_page_size(mp, &pg_sz);
> >> +
> >>         total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> >>
> >> -       for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
> >> +       for (off = 0, i = 0; i < max_objs; i++) {
> >> +               /* align offset to next page start if required */
> >> +               if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0)
> >> +                       off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
>
> > Moving the offset to the start of the next page and then freeing (vaddr + off + header_size)
> > to the pool does not fit the octeontx2 mempool's buffer alignment requirement
> > (the buffer address needs to be a multiple of the buffer size).
>
> It sounds like the octeontx2 mempool should have its own populate callback
> which takes care of it.

A driver-specific populate function is not a bad idea (a rough sketch of what
that could look like is at the end of this mail). The only concerns would be:

# We need to duplicate rte_mempool_op_populate_default() and
  rte_mempool_op_calc_mem_size_default() in the driver.

# We need to make sure that whenever someone changes
  rte_mempool_op_populate_default() or rte_mempool_op_calc_mem_size_default(),
  he/she also updates the drivers.

# One more point I would like to add here: the calculation of the object
  padding required for MEMPOOL_F_NO_SPREAD, i.e. optimize_object_size(), is
  NOT GENERIC. The get_gcd()-based logic assumes one particular mapping of
  addresses to DDR channels, but that "spread" is defined by the DDR
  controller and depends on the SoC or micro-architecture, so we need to
  take it into account as well (see the second sketch below).
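
To make the driver-specific populate idea concrete, here is a minimal,
untested sketch written in the spirit of rte_mempool_op_populate_default().
It only pads the head of the memory chunk so that every object handed back
to the pool sits at an address that is a multiple of total_elt_sz; the
function name and the exact alignment rule are assumptions for illustration,
not the real octeontx2 driver code:

#include <stdint.h>
#include <rte_memory.h>
#include <rte_mempool.h>

/* Hypothetical driver populate callback: pad the start of the chunk so
 * that the first object address (vaddr + head_pad + header_size) is a
 * multiple of total_elt_sz; every following object, placed total_elt_sz
 * apart, then keeps that property. */
static int
driver_populate_aligned(struct rte_mempool *mp, unsigned int max_objs,
			void *vaddr, rte_iova_t iova, size_t len,
			rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
{
	size_t total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
	uintptr_t first_obj = (uintptr_t)vaddr + mp->header_size;
	size_t off, head_pad;
	unsigned int i;
	void *obj;

	head_pad = first_obj % total_elt_sz;
	if (head_pad != 0)
		head_pad = total_elt_sz - head_pad;

	for (off = head_pad, i = 0;
	     off + total_elt_sz <= len && i < max_objs; i++) {
		off += mp->header_size;
		obj = (char *)vaddr + off;
		obj_cb(mp, obj_cb_arg, obj,
		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
		off += mp->elt_size + mp->trailer_size;
	}

	return i;
}

A matching calc_mem_size callback would have to reserve up to total_elt_sz
extra bytes per chunk for the head padding, which is exactly the duplication
concern listed above.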
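
And for the last point, a rough standalone illustration (not the DPDK source,
just the idea) of the gcd-based padding that optimize_object_size() applies
when MEMPOOL_F_NO_SPREAD is not set, which is where the baked-in interleaving
assumption lives:

#include <stddef.h>

/* Plain Euclid gcd, used only for the illustration below. */
static size_t
gcd(size_t a, size_t b)
{
	while (b != 0) {
		size_t t = a % b;
		a = b;
		b = t;
	}
	return a;
}

/* Grow the object size, expressed in pool-alignment units, until it shares
 * no common factor with nchannels * nranks, so that consecutive objects
 * start on different memory channels. This only helps if the DDR controller
 * interleaves addresses across channels in that simple round-robin fashion;
 * which mapping is actually used is SoC-specific, which is the point above. */
static size_t
pad_for_channel_spread(size_t obj_units, size_t nchannels, size_t nranks)
{
	while (gcd(obj_units, nchannels * nranks) != 1)
		obj_units++;
	return obj_units;
}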