From: Shreyansh Jain
Subject: Re: [PATCH v8 1/3] mempool: support external mempool operations
Date: Thu, 9 Jun 2016 11:49:44 +0000
To: Jerin Jacob, "Hunt, David"
Cc: dev@dpdk.org, olivier.matz@6wind.com, viktorin@rehivetech.com

Hi Jerin,

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Thursday, June 09, 2016 4:02 PM
> To: Hunt, David
> Cc: Shreyansh Jain; dev@dpdk.org; olivier.matz@6wind.com;
> viktorin@rehivetech.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external
> mempool operations
>
> On Thu, Jun 09, 2016 at 10:39:46AM +0100, Hunt, David wrote:
> > Hi Shreyansh,
> >
> > On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > > Hi David,
> > >
> > > Thanks for the explanation. I have some comments inline...
> > >
> > > > -----Original Message-----
> > > > From: Hunt, David [mailto:david.hunt@intel.com]
> > > > Sent: Tuesday, June 07, 2016 2:56 PM
> > > > To: Shreyansh Jain; dev@dpdk.org
> > > > Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> > > > jerin.jacob@caviumnetworks.com
> > > > Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external
> > > > mempool operations
> > > >
> > > > Hi Shreyansh,
> > > >
> > > > On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> > > > > Hi,
> > > > >
> > > > > (Apologies for the overly-eager email sent on this thread
> > > > > earlier. Will be more careful in future.)
> > > > > This is more of a question/clarification than a comment. (And I
> > > > > have taken only some snippets from the original mail to keep it
> > > > > cleaner.)
> > > > >
> > > > > > +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> > > > >
> > > > > From the above, what I understand is that multiple packet pool
> > > > > handlers can be created.
> > > > > I have a use-case where the application has multiple pools, but
> > > > > only the packet pool is hardware-backed. Using the hardware for
> > > > > general buffer requirements would prove costly.
> > > > > From what I understand from the patch, selection of the pool is
> > > > > based on the flags below.
> > > >
> > > > The flags are only used to select one of the default handlers, for
> > > > backward compatibility through the rte_mempool_create call. If you
> > > > wish to use a mempool handler that is not one of the defaults
> > > > (i.e. a new hardware handler), you would use
> > > > rte_mempool_create_empty followed by the
> > > > rte_mempool_set_ops_byname call.
> > > > So, for the external handlers, you create an empty mempool, then
> > > > set the operations (ops) for that particular mempool.
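If I read that right, the flow would look roughly like the sketch below
(a minimal sketch only: "my_hw_handler" is a made-up name standing in
for a handler registered via MEMPOOL_REGISTER_OPS(), error handling is
abbreviated, and I am assuming the two-argument
rte_mempool_set_ops_byname() of this patch revision):

	#include <rte_mempool.h>
	#include <rte_mbuf.h>

	static struct rte_mempool *
	pktpool_on_external_handler(const char *name, unsigned n,
				    int socket_id)
	{
		struct rte_mempool *mp;

		/* create the shell of the pool, no objects yet */
		mp = rte_mempool_create_empty(name, n,
				sizeof(struct rte_mbuf) +
					RTE_MBUF_DEFAULT_BUF_SIZE,
				250 /* cache size */,
				sizeof(struct rte_pktmbuf_pool_private),
				socket_id, 0);
		if (mp == NULL)
			return NULL;

		/* attach the external handler instead of the ring default */
		if (rte_mempool_set_ops_byname(mp, "my_hw_handler") < 0) {
			rte_mempool_free(mp);
			return NULL;
		}

		/* populate through the handler chosen above */
		rte_pktmbuf_pool_init(mp, NULL);
		if (rte_mempool_populate_default(mp) < 0) {
			rte_mempool_free(mp);
			return NULL;
		}
		rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
		return mp;
	}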
> > > I am concerned about the existing applications (for example,
> > > l3fwd). An explicit 'rte_mempool_create_empty ->
> > > rte_mempool_set_ops_byname' model would require modifications to
> > > these applications.
> > > Ideally, without any modifications, these applications should be
> > > able to use packet pools (backed by hardware) and buffer pools
> > > (backed by ring/others) - transparently.
> > >
> > > If I go by your suggestions, what I understand is that doing the
> > > above without modification to applications would be equivalent to:
> > >
> > > struct rte_mempool_ops custom_hw_allocator = {...}
> > >
> > > thereafter, in config/common_base:
> > >
> > > CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> > >
> > > after which calls to rte_pktmbuf_pool_create would use the new
> > > allocator.
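Expanding my "custom_hw_allocator = {...}" shorthand above, for the
archive: a sketch of what registering such a handler could look like,
assuming the alloc/free/enqueue/dequeue/get_count callback names of
this patch series. The hw_* functions are hypothetical stubs; a real
handler would talk to the hardware buffer manager in them.

	#include <rte_mempool.h>

	static int
	hw_alloc(struct rte_mempool *mp)
	{
		mp->pool_data = NULL; /* would point at the hw pool context */
		return 0;
	}

	static void
	hw_free(struct rte_mempool *mp)
	{
		(void)mp; /* would release the hw pool context */
	}

	static int
	hw_enqueue(struct rte_mempool *mp, void * const *obj_table,
		   unsigned n)
	{
		/* would return n buffers to the hw pool */
		(void)mp; (void)obj_table; (void)n;
		return 0;
	}

	static int
	hw_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
	{
		/* would fetch n buffers from the hw pool */
		(void)mp; (void)obj_table; (void)n;
		return 0;
	}

	static unsigned
	hw_get_count(const struct rte_mempool *mp)
	{
		(void)mp;
		return 0;
	}

	static const struct rte_mempool_ops custom_hw_allocator = {
		.name = "custom_hw_allocator",
		.alloc = hw_alloc,
		.free = hw_free,
		.enqueue = hw_enqueue,
		.dequeue = hw_dequeue,
		.get_count = hw_get_count,
	};

	MEMPOOL_REGISTER_OPS(custom_hw_allocator);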
> > Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls
> > to rte_mempool_create will continue to use the default handlers
> > (ring based).
> > > But another problem arises here.
> > >
> > > There are two distinct paths for allocation of a memory pool:
> > > 1. A 'pkt' pool:
> > >    rte_pktmbuf_pool_create
> > >      \- rte_mempool_create_empty
> > >      |    \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> > >      |
> > >      `- rte_mempool_set_ops_byname
> > >             (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> > >         /* Override default 'ring_mp_mc' of
> > >          * rte_mempool_create */
> > >
> > > 2. Through the generic mempool create API:
> > >    rte_mempool_create
> > >      \- rte_mempool_create_empty
> > >           (passing pktmbuf and pool constructors)
> > >
> > > I found various instances in example applications where
> > > rte_mempool_create() is being called directly for packet pools -
> > > bypassing the more semantically correct call to rte_pktmbuf_* for
> > > packet pools.
> > >
> > > In the (2) control path, RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be
> > > able to replace custom handler operations for packet buffer
> > > allocations.
> > >
> > > From a performance point of view, applications should be able to
> > > select between packet pools and non-packet pools.
> >
> > This is intended for backward compatibility and API consistency. Any
> > applications that use rte_mempool_create directly will continue to
> > use the default mempool handlers. If they need to use a custom
> > handler, they will need to be modified to call the newer API,
> > rte_mempool_create_empty and rte_mempool_set_ops_byname.
> >
> > > > > > +	/*
> > > > > > +	 * Since we have 4 combinations of SP/SC/MP/MC, examine the
> > > > > > +	 * flags to set the correct index into the table of ops
> > > > > > +	 * structs.
> > > > > > +	 */
> > > > > > +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> > > > > > +	else if (flags & MEMPOOL_F_SP_PUT)
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> > > > > > +	else if (flags & MEMPOOL_F_SC_GET)
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> > > > > > +	else
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> > > > > > +
> > > My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC',
> > > which, if specified, would:
> > >
> > > ...
> > > #define MEMPOOL_F_SC_GET    0x0008
> > > #define MEMPOOL_F_PKT_ALLOC 0x0010
> > > ...
> > >
> > > in rte_mempool_create_empty:
> > > ... after checking the other MEMPOOL_F_* flags...
> > >
> > > if (flags & MEMPOOL_F_PKT_ALLOC)
> > > 	rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
> > >
> > > and removing the now-redundant call to rte_mempool_set_ops_byname()
> > > in rte_pktmbuf_pool_create().
> > >
> > > Thereafter, rte_pktmbuf_pool_create can be changed to:
> > >
> > > 	...
> > > 	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > > -		sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> > > +		sizeof(struct rte_pktmbuf_pool_private), socket_id,
> > > +		MEMPOOL_F_PKT_ALLOC);
> > > 	if (mp == NULL)
> > > 		return NULL;
> >
> > Yes, this would somewhat simplify the creation of a pktmbuf pool, in
> > that it replaces the rte_mempool_set_ops_byname call with a flag bit.
> > However, I'm not sure we want to introduce a third method of creating
> > a mempool to developers. If we introduced this, we would then have:
> > 1. rte_pktmbuf_pool_create()
> > 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >    would use the configured custom handler)
> > 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
> >    followed by a call to rte_mempool_set_ops_byname() (which would
> >    allow several different custom handlers to be used in one
> >    application)
> >
> > Does anyone else have an opinion on this? Olivier, Jerin, Jan?
>
> As I mentioned earlier, my take is not to create separate APIs for
> external mempool handlers. In my view it's the same thing, just a
> separate mempool handler through function pointers.
>
> To keep backward compatibility, I think we can extend the flags in
> rte_mempool_create and have a single API for external/internal pool
> creation (which also makes it easy for existing applications: just add
> a mempool-handler command line argument to choose the handler).

Maybe I am interpreting it wrong, but are you suggesting a single
mempool handler for all buffer/packet needs of an application (passed
as a command line argument)?

That would be inefficient, especially for cases where the pool is
backed by hardware. The application wouldn't want its generic buffers
to consume a hardware resource that would be better used for packets.

I was hoping that the external mempool handler would help segregate
such use-cases and allow applications to tap into accelerated packet
processors.

Do correct me if my understanding is wrong.

>
> Jerin
>
> >
> > Regards,
> > Dave.
> >

-
Shreyansh