DPDK-dev Archive on lore.kernel.org
* Re: [dpdk-dev] [PATCH v6 0/4] add IOVA = VA support in KNI
@ 2019-07-12 11:37 Jerin Jacob Kollanukkaran
  2019-07-12 12:09 ` Burakov, Anatoly
  0 siblings, 1 reply; 9+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-07-12 11:37 UTC (permalink / raw)
  To: Burakov, Anatoly, Ferruh Yigit, Vamsi Krishna Attunuru, dev
  Cc: olivier.matz, arybchenko

> -----Original Message-----
> From: Burakov, Anatoly <anatoly.burakov@intel.com>
> Sent: Friday, July 12, 2019 4:19 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Vamsi Krishna Attunuru
> <vattunuru@marvell.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; arybchenko@solarflare.com
> Subject: [EXT] Re: [dpdk-dev] [PATCH v6 0/4] add IOVA = VA support in KNI
> On 12-Jul-19 11:26 AM, Jerin Jacob Kollanukkaran wrote:
> >>>> What do you think?
> >>>
> >>> IMO, If possible we can avoid extra indirection of new config. In
> >>> worst case We can add it. How about following to not have new config
> >>>
> >>> 1) Make MEMPOOL_F_NO_PAGE_BOUND  as default
> >>> http://patches.dpdk.org/patch/55277/
> >>> There is absolutely zero overhead of this flag considering the huge
> >>> page size are minimum 2MB. Typically 512MB or 1GB.
> >>> Any one has any objection?
> >>
> >> Pretty much zero overhead in hugepage case, not so in non-hugepage
> case.
> >> It's rare, but since we support it, we have to account for it.
> >
> > That is a fair concern.
> > How about enable the flag in mempool ONLY when
> rte_eal_has_hugepages()
> > In the common layer?
> 
> Perhaps it's better to check page size of the underlying memory, because 4K
> pages are not necessarily no-huge mode - they could also be external
> memory. That's going to be a bit hard because there may not be a way to
> know which memory we're allocating from in advance, aside from simple
> checks like `(rte_eal_has_hugepages() ||
> rte_malloc_heap_socket_is_external(socket_id))` - but maybe those would
> be sufficient.

Yes.


> 
> >
> >> (also, i don't really like the name NO_PAGE_BOUND since in memzone
> >> API there's a "bounded memzone" allocation API, and this flag's name
> >> reads like objects would not be bounded by page size, not that they
> >> won't cross page
> >> boundary)
> >
> > No strong opinion for the name. What name you suggest?
> 
> How about something like MEMPOOL_F_NO_PAGE_SPLIT?

Looks good to me.

In summary, the changes w.r.t. the existing patch are:
- Rename NO_PAGE_BOUND to MEMPOOL_F_NO_PAGE_SPLIT
- Set this flag in rte_pktmbuf_pool_create() when rte_eal_has_hugepages() ||
  rte_malloc_heap_socket_is_external(socket_id)

Olivier, Any objection?
Ref: http://patches.dpdk.org/patch/55277/

> 
> >
> >>
> >>>
> >>> 2) Introduce rte_kni_mempool_create() API in kni lib to abstract the
> >>> Mempool requirement for KNI. This will enable portable KNI
> applications.
> >>
> >> This means that using KNI is not a drop-in replacement for any other
> >> PMD. If maintainers of KNI are OK with this then sure :)
> >
> > The PMD  don’t have any dependency on NO_PAGE_BOUND flag. Right?
> > If KNI app is using rte_kni_mempool_create() to create the mempool, In
> > what case do you see problem with specific PMD?
> 
> I'm not saying the PMD's have a dependency on the flag, i'm saying that the
> same code cannot be used with and without KNI because you need to call a
> separate API for mempool creation if you want to use it with KNI.

Yes, the newly introduced API would need to be called starting from 19.08, if we
do not choose the first approach above. It can be documented in "API changes" in
the release notes. I prefer the first solution if there is no downside.


> For KNI, the underlying memory must abide by certain constraints that are
> not there for other PMD's, so either you fix all memory to these constraints,
> or you lose the ability to reuse the code with other PMD's as is.
> 
> That is, unless i'm grossly misunderstanding what you're suggesting here :)
> 
> >
> >>
> >> --
> >> Thanks,
> >> Anatoly
> 
> 
> --
> Thanks,
> Anatoly

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/4] add IOVA = VA support in KNI
  2019-07-12 11:37 [dpdk-dev] [PATCH v6 0/4] add IOVA = VA support in KNI Jerin Jacob Kollanukkaran
@ 2019-07-12 12:09 ` Burakov, Anatoly
  2019-07-12 12:28   ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
  0 siblings, 1 reply; 9+ messages in thread
From: Burakov, Anatoly @ 2019-07-12 12:09 UTC (permalink / raw)
  To: Jerin Jacob Kollanukkaran, Ferruh Yigit, Vamsi Krishna Attunuru, dev
  Cc: olivier.matz, arybchenko

On 12-Jul-19 12:37 PM, Jerin Jacob Kollanukkaran wrote:
>> -----Original Message-----
>> From: Burakov, Anatoly <anatoly.burakov@intel.com>
>> Sent: Friday, July 12, 2019 4:19 PM
>> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ferruh Yigit
>> <ferruh.yigit@intel.com>; Vamsi Krishna Attunuru
>> <vattunuru@marvell.com>; dev@dpdk.org
>> Cc: olivier.matz@6wind.com; arybchenko@solarflare.com
>> Subject: [EXT] Re: [dpdk-dev] [PATCH v6 0/4] add IOVA = VA support in KNI
>> On 12-Jul-19 11:26 AM, Jerin Jacob Kollanukkaran wrote:
>>>>>> What do you think?
>>>>>
>>>>> IMO, If possible we can avoid extra indirection of new config. In
>>>>> worst case We can add it. How about following to not have new config
>>>>>
>>>>> 1) Make MEMPOOL_F_NO_PAGE_BOUND  as default
>>>>> http://patches.dpdk.org/patch/55277/
>>>>> There is absolutely zero overhead of this flag considering the huge
>>>>> page size are minimum 2MB. Typically 512MB or 1GB.
>>>>> Any one has any objection?
>>>>
>>>> Pretty much zero overhead in hugepage case, not so in non-hugepage
>> case.
>>>> It's rare, but since we support it, we have to account for it.
>>>
>>> That is a fair concern.
>>> How about enable the flag in mempool ONLY when
>> rte_eal_has_hugepages()
>>> In the common layer?
>>
>> Perhaps it's better to check page size of the underlying memory, because 4K
>> pages are not necessarily no-huge mode - they could also be external
>> memory. That's going to be a bit hard because there may not be a way to
>> know which memory we're allocating from in advance, aside from simple
>> checks like `(rte_eal_has_hugepages() ||
>> rte_malloc_heap_socket_is_external(socket_id))` - but maybe those would
>> be sufficient.
> 
> Yes.
> 
> 
>>
>>>
>>>> (also, i don't really like the name NO_PAGE_BOUND since in memzone
>>>> API there's a "bounded memzone" allocation API, and this flag's name
>>>> reads like objects would not be bounded by page size, not that they
>>>> won't cross page
>>>> boundary)
>>>
>>> No strong opinion for the name. What name you suggest?
>>
>> How about something like MEMPOOL_F_NO_PAGE_SPLIT?
> 
> Looks good to me.
> 
> In summary, Change wrt existing patch"
> - Change NO_PAGE_BOUND to MEMPOOL_F_NO_PAGE_SPLIT
> - Set this flag in  rte_pktmbuf_pool_create() when rte_eal_has_hugepages() ||
>   rte_malloc_heap_socket_is_external(socket_id))

If we are to have a special KNI allocation API, would we even need that?

> 
> Olivier, Any objection?
> Ref: http://patches.dpdk.org/patch/55277/
> 



-- 
Thanks,
Anatoly


* Re: [dpdk-dev] [EXT] Re: [PATCH v6 0/4] add IOVA = VA support in KNI
  2019-07-12 12:09 ` Burakov, Anatoly
@ 2019-07-12 12:28   ` Jerin Jacob Kollanukkaran
  2019-07-15  4:54     ` Jerin Jacob Kollanukkaran
  0 siblings, 1 reply; 9+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-07-12 12:28 UTC (permalink / raw)
  To: Burakov, Anatoly, Ferruh Yigit, Vamsi Krishna Attunuru, dev
  Cc: olivier.matz, arybchenko

> -----Original Message-----
> From: Burakov, Anatoly <anatoly.burakov@intel.com>
> Sent: Friday, July 12, 2019 5:40 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Vamsi Krishna Attunuru
> <vattunuru@marvell.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; arybchenko@solarflare.com
> Subject: [EXT] Re: [dpdk-dev] [PATCH v6 0/4] add IOVA = VA support in KNI
> 
> External Email
> 
> ----------------------------------------------------------------------
> On 12-Jul-19 12:37 PM, Jerin Jacob Kollanukkaran wrote:
> >> -----Original Message-----
> >> From: Burakov, Anatoly <anatoly.burakov@intel.com>
> >> Sent: Friday, July 12, 2019 4:19 PM
> >> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ferruh Yigit
> >> <ferruh.yigit@intel.com>; Vamsi Krishna Attunuru
> >> <vattunuru@marvell.com>; dev@dpdk.org
> >> Cc: olivier.matz@6wind.com; arybchenko@solarflare.com
> >> Subject: [EXT] Re: [dpdk-dev] [PATCH v6 0/4] add IOVA = VA support in
> >> KNI On 12-Jul-19 11:26 AM, Jerin Jacob Kollanukkaran wrote:
> >>>>>> What do you think?
> >>>>>
> >>>>> IMO, If possible we can avoid extra indirection of new config. In
> >>>>> worst case We can add it. How about following to not have new
> >>>>> config
> >>>>>
> >>>>> 1) Make MEMPOOL_F_NO_PAGE_BOUND  as default
> >>>>> http://patches.dpdk.org/patch/55277/
> >>>>> There is absolutely zero overhead of this flag considering the
> >>>>> huge page size are minimum 2MB. Typically 512MB or 1GB.
> >>>>> Any one has any objection?
> >>>>
> >>>> Pretty much zero overhead in hugepage case, not so in non-hugepage
> >> case.
> >>>> It's rare, but since we support it, we have to account for it.
> >>>
> >>> That is a fair concern.
> >>> How about enable the flag in mempool ONLY when
> >> rte_eal_has_hugepages()
> >>> In the common layer?
> >>
> >> Perhaps it's better to check page size of the underlying memory,
> >> because 4K pages are not necessarily no-huge mode - they could also
> >> be external memory. That's going to be a bit hard because there may
> >> not be a way to know which memory we're allocating from in advance,
> >> aside from simple checks like `(rte_eal_has_hugepages() ||
> >> rte_malloc_heap_socket_is_external(socket_id))` - but maybe those
> >> would be sufficient.
> >
> > Yes.
> >
> >
> >>
> >>>
> >>>> (also, i don't really like the name NO_PAGE_BOUND since in memzone
> >>>> API there's a "bounded memzone" allocation API, and this flag's
> >>>> name reads like objects would not be bounded by page size, not that
> >>>> they won't cross page
> >>>> boundary)
> >>>
> >>> No strong opinion for the name. What name you suggest?
> >>
> >> How about something like MEMPOOL_F_NO_PAGE_SPLIT?
> >
> > Looks good to me.
> >
> > In summary, Change wrt existing patch"
> > - Change NO_PAGE_BOUND to MEMPOOL_F_NO_PAGE_SPLIT
> > - Set this flag in  rte_pktmbuf_pool_create () when
> rte_eal_has_hugepages() ||
> >   rte_malloc_heap_socket_is_external(socket_id))
> 
> If we are to have a special KNI allocation API, would we even need that?

No need for this change in rte_pktmbuf_pool_create() if we introduce
a new rte_kni_pktmbuf_pool_create() API.

> 
> >
> > Olivier, Any objection?
> > Ref: http://patches.dpdk.org/patch/55277/
> >
> 
> 
> 
> --
> Thanks,
> Anatoly


* Re: [dpdk-dev] [EXT] Re: [PATCH v6 0/4] add IOVA = VA support in KNI
  2019-07-12 12:28   ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
@ 2019-07-15  4:54     ` Jerin Jacob Kollanukkaran
  2019-07-15  9:38       ` Burakov, Anatoly
  0 siblings, 1 reply; 9+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-07-15  4:54 UTC (permalink / raw)
  To: Jerin Jacob Kollanukkaran, Burakov, Anatoly, Ferruh Yigit,
	Vamsi Krishna Attunuru, dev
  Cc: olivier.matz, arybchenko

> > >>>> (also, i don't really like the name NO_PAGE_BOUND since in
> > >>>> memzone API there's a "bounded memzone" allocation API, and this
> > >>>> flag's name reads like objects would not be bounded by page size,
> > >>>> not that they won't cross page
> > >>>> boundary)
> > >>>
> > >>> No strong opinion for the name. What name you suggest?
> > >>
> > >> How about something like MEMPOOL_F_NO_PAGE_SPLIT?
> > >
> > > Looks good to me.
> > >
> > > In summary, Change wrt existing patch"
> > > - Change NO_PAGE_BOUND to MEMPOOL_F_NO_PAGE_SPLIT
> > > - Set this flag in  rte_pktmbuf_pool_create () when
> > rte_eal_has_hugepages() ||
> > >   rte_malloc_heap_socket_is_external(socket_id))
> >
> > If we are to have a special KNI allocation API, would we even need that?
> 
> Not need this change in rte_pktmbuf_pool_create () if we introduce a new
> rte_kni_pktmbuf_pool_create () API.

Ferruh, Olivier, Anatoly,

Any objection to creating a new rte_kni_pktmbuf_pool_create() API
to embed the MEMPOOL_F_NO_PAGE_SPLIT flag requirement for KNI + IOVA as VA?



> 
> >
> > >
> > > Olivier, Any objection?
> > > Ref: http://patches.dpdk.org/patch/55277/
> > >
> >
> >
> >
> > --
> > Thanks,
> > Anatoly


* Re: [dpdk-dev] [EXT] Re: [PATCH v6 0/4] add IOVA = VA support in KNI
  2019-07-15  4:54     ` Jerin Jacob Kollanukkaran
@ 2019-07-15  9:38       ` Burakov, Anatoly
  2019-07-16  8:46         ` Olivier Matz
  0 siblings, 1 reply; 9+ messages in thread
From: Burakov, Anatoly @ 2019-07-15  9:38 UTC (permalink / raw)
  To: Jerin Jacob Kollanukkaran, Ferruh Yigit, Vamsi Krishna Attunuru, dev
  Cc: olivier.matz, arybchenko

On 15-Jul-19 5:54 AM, Jerin Jacob Kollanukkaran wrote:
>>>>>>> (also, i don't really like the name NO_PAGE_BOUND since in
>>>>>>> memzone API there's a "bounded memzone" allocation API, and this
>>>>>>> flag's name reads like objects would not be bounded by page size,
>>>>>>> not that they won't cross page
>>>>>>> boundary)
>>>>>>
>>>>>> No strong opinion for the name. What name you suggest?
>>>>>
>>>>> How about something like MEMPOOL_F_NO_PAGE_SPLIT?
>>>>
>>>> Looks good to me.
>>>>
>>>> In summary, Change wrt existing patch"
>>>> - Change NO_PAGE_BOUND to MEMPOOL_F_NO_PAGE_SPLIT
>>>> - Set this flag in  rte_pktmbuf_pool_create () when
>>> rte_eal_has_hugepages() ||
>>>>    rte_malloc_heap_socket_is_external(socket_id))
>>>
>>> If we are to have a special KNI allocation API, would we even need that?
>>
>> Not need this change in rte_pktmbuf_pool_create () if we introduce a new
>> rte_kni_pktmbuf_pool_create () API.
> 
> Ferruh, Olivier, Anatoly,
> 
> Any objection to create new rte_kni_pktmbuf_pool_create () API
> to embedded MEMPOOL_F_NO_PAGE_SPLIT flag requirement for KNI + IOVA as VA
> 
> 

As long as we are all aware of what that means and agree with that
consequence (namely, separate code paths for KNI and other PMDs), then I
have no specific objections.

-- 
Thanks,
Anatoly


* Re: [dpdk-dev] [EXT] Re: [PATCH v6 0/4] add IOVA = VA support in KNI
  2019-07-15  9:38       ` Burakov, Anatoly
@ 2019-07-16  8:46         ` Olivier Matz
  2019-07-16  9:40           ` Vamsi Krishna Attunuru
  0 siblings, 1 reply; 9+ messages in thread
From: Olivier Matz @ 2019-07-16  8:46 UTC (permalink / raw)
  To: Burakov, Anatoly
  Cc: Jerin Jacob Kollanukkaran, Ferruh Yigit, Vamsi Krishna Attunuru,
	dev, arybchenko

Hi,

On Mon, Jul 15, 2019 at 10:38:53AM +0100, Burakov, Anatoly wrote:
> On 15-Jul-19 5:54 AM, Jerin Jacob Kollanukkaran wrote:
> > > > > > > > (also, i don't really like the name NO_PAGE_BOUND since in
> > > > > > > > memzone API there's a "bounded memzone" allocation API, and this
> > > > > > > > flag's name reads like objects would not be bounded by page size,
> > > > > > > > not that they won't cross page
> > > > > > > > boundary)
> > > > > > > 
> > > > > > > No strong opinion for the name. What name you suggest?
> > > > > > 
> > > > > > How about something like MEMPOOL_F_NO_PAGE_SPLIT?
> > > > > 
> > > > > Looks good to me.
> > > > > 
> > > > > In summary, Change wrt existing patch"
> > > > > - Change NO_PAGE_BOUND to MEMPOOL_F_NO_PAGE_SPLIT
> > > > > - Set this flag in  rte_pktmbuf_pool_create () when
> > > > rte_eal_has_hugepages() ||
> > > > >    rte_malloc_heap_socket_is_external(socket_id))
> > > > 
> > > > If we are to have a special KNI allocation API, would we even need that?
> > > 
> > > Not need this change in rte_pktmbuf_pool_create () if we introduce a new
> > > rte_kni_pktmbuf_pool_create () API.
> > 
> > Ferruh, Olivier, Anatoly,
> > 
> > Any objection to create new rte_kni_pktmbuf_pool_create () API
> > to embedded MEMPOOL_F_NO_PAGE_SPLIT flag requirement for KNI + IOVA as VA
> > 
> > 
> 
> As long as we all are aware of what that means and agree with that
> consequence (namely, separate codepaths for KNI and other PMD's) then i have
> no specific objections.

Sorry for the late feedback.

I think we can change the default behavior of mempool populate() to
prevent objects from spanning 2 pages, except if the size of the
object is bigger than the size of the page. This is already what is done
in rte_mempool_op_calc_mem_size_default() when we want to estimate the
amount of memory needed to allocate N objects.

This would avoid the introduction of a specific API to allocate packets
for KNI, and of a specific mempool flag.

About the problem of the 9K mbuf mentioned by Anatoly, could we imagine a
check in the KNI code that just returns an error "does not work with
size(mbuf) > size(page)"?

Thanks,
Olivier


* Re: [dpdk-dev] [EXT] Re: [PATCH v6 0/4] add IOVA = VA support in KNI
  2019-07-16  8:46         ` Olivier Matz
@ 2019-07-16  9:40           ` Vamsi Krishna Attunuru
  2019-07-16  9:55             ` Olivier Matz
  0 siblings, 1 reply; 9+ messages in thread
From: Vamsi Krishna Attunuru @ 2019-07-16  9:40 UTC (permalink / raw)
  To: Olivier Matz, Burakov, Anatoly
  Cc: Jerin Jacob Kollanukkaran, Ferruh Yigit, dev, arybchenko



> -----Original Message-----
> From: Olivier Matz <olivier.matz@6wind.com>
> Sent: Tuesday, July 16, 2019 2:17 PM
> To: Burakov, Anatoly <anatoly.burakov@intel.com>
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Vamsi Krishna Attunuru <vattunuru@marvell.com>;
> dev@dpdk.org; arybchenko@solarflare.com
> Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v6 0/4] add IOVA = VA support in KNI
> 
> Hi,
> 
> On Mon, Jul 15, 2019 at 10:38:53AM +0100, Burakov, Anatoly wrote:
> > On 15-Jul-19 5:54 AM, Jerin Jacob Kollanukkaran wrote:
> > > > > > > > > (also, i don't really like the name NO_PAGE_BOUND since
> > > > > > > > > in memzone API there's a "bounded memzone" allocation
> > > > > > > > > API, and this flag's name reads like objects would not
> > > > > > > > > be bounded by page size, not that they won't cross page
> > > > > > > > > boundary)
> > > > > > > >
> > > > > > > > No strong opinion for the name. What name you suggest?
> > > > > > >
> > > > > > > How about something like MEMPOOL_F_NO_PAGE_SPLIT?
> > > > > >
> > > > > > Looks good to me.
> > > > > >
> > > > > > In summary, Change wrt existing patch"
> > > > > > - Change NO_PAGE_BOUND to MEMPOOL_F_NO_PAGE_SPLIT
> > > > > > - Set this flag in  rte_pktmbuf_pool_create () when
> > > > > rte_eal_has_hugepages() ||
> > > > > >    rte_malloc_heap_socket_is_external(socket_id))
> > > > >
> > > > > If we are to have a special KNI allocation API, would we even need that?
> > > >
> > > > Not need this change in rte_pktmbuf_pool_create () if we introduce
> > > > a new rte_kni_pktmbuf_pool_create () API.
> > >
> > > Ferruh, Olivier, Anatoly,
> > >
> > > Any objection to create new rte_kni_pktmbuf_pool_create () API to
> > > embedded MEMPOOL_F_NO_PAGE_SPLIT flag requirement for KNI + IOVA
> as
> > > VA
> > >
> > >
> >
> > As long as we all are aware of what that means and agree with that
> > consequence (namely, separate codepaths for KNI and other PMD's) then
> > i have no specific objections.
> 
> Sorry for the late feedback.
> 
> I think we can change the default behavior of mempool populate(), to prevent
> objects from being accross 2 pages, except if the size of the object is bigger than
> the size of the page. This is already what is done in
> rte_mempool_op_calc_mem_size_default() when we want to estimate the
> amount of memory needed to allocate N objects.
> 
> This would avoid the introduction of a specific API to allocate packets for kni,
> and a specific mempool flag.
> 
> About the problem of 9K mbuf mentionned by Anatoly, could we imagine a
> check in kni code, that just returns an error "does not work with
> size(mbuf) > size(page)" ?
> 

Yes, changing the default behavior avoids new APIs or flags.
Two minor changes on top of the above suggestions:
1) Can the NO_PAGE_SPLIT flag be retained? The sequence would be: the flag is set by default in rte_mempool_populate_default(),
and later it can be cleared based on obj_per_page in rte_mempool_op_calc_mem_size_default(). I do not see a specific
requirement for this flag apart from handling the above sequence.
2) For the problem of 9K mbufs, I think that check could be addressed in the KNI lib (in rte_kni_init, returning an error).

> Thanks,
> Olivier


* Re: [dpdk-dev] [EXT] Re: [PATCH v6 0/4] add IOVA = VA support in KNI
  2019-07-16  9:40           ` Vamsi Krishna Attunuru
@ 2019-07-16  9:55             ` Olivier Matz
  2019-07-16 10:07               ` Vamsi Krishna Attunuru
  0 siblings, 1 reply; 9+ messages in thread
From: Olivier Matz @ 2019-07-16  9:55 UTC (permalink / raw)
  To: Vamsi Krishna Attunuru
  Cc: Burakov, Anatoly, Jerin Jacob Kollanukkaran, Ferruh Yigit, dev,
	arybchenko

Hi,

On Tue, Jul 16, 2019 at 09:40:59AM +0000, Vamsi Krishna Attunuru wrote:
> 
> 
> > -----Original Message-----
> > From: Olivier Matz <olivier.matz@6wind.com>
> > Sent: Tuesday, July 16, 2019 2:17 PM
> > To: Burakov, Anatoly <anatoly.burakov@intel.com>
> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ferruh Yigit
> > <ferruh.yigit@intel.com>; Vamsi Krishna Attunuru <vattunuru@marvell.com>;
> > dev@dpdk.org; arybchenko@solarflare.com
> > Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v6 0/4] add IOVA = VA support in KNI
> > 
> > Hi,
> > 
> > On Mon, Jul 15, 2019 at 10:38:53AM +0100, Burakov, Anatoly wrote:
> > > On 15-Jul-19 5:54 AM, Jerin Jacob Kollanukkaran wrote:
> > > > > > > > > > (also, i don't really like the name NO_PAGE_BOUND since
> > > > > > > > > > in memzone API there's a "bounded memzone" allocation
> > > > > > > > > > API, and this flag's name reads like objects would not
> > > > > > > > > > be bounded by page size, not that they won't cross page
> > > > > > > > > > boundary)
> > > > > > > > >
> > > > > > > > > No strong opinion for the name. What name you suggest?
> > > > > > > >
> > > > > > > > How about something like MEMPOOL_F_NO_PAGE_SPLIT?
> > > > > > >
> > > > > > > Looks good to me.
> > > > > > >
> > > > > > > In summary, Change wrt existing patch"
> > > > > > > - Change NO_PAGE_BOUND to MEMPOOL_F_NO_PAGE_SPLIT
> > > > > > > - Set this flag in  rte_pktmbuf_pool_create () when
> > > > > > rte_eal_has_hugepages() ||
> > > > > > >    rte_malloc_heap_socket_is_external(socket_id))
> > > > > >
> > > > > > If we are to have a special KNI allocation API, would we even need that?
> > > > >
> > > > > Not need this change in rte_pktmbuf_pool_create () if we introduce
> > > > > a new rte_kni_pktmbuf_pool_create () API.
> > > >
> > > > Ferruh, Olivier, Anatoly,
> > > >
> > > > Any objection to create new rte_kni_pktmbuf_pool_create () API to
> > > > embedded MEMPOOL_F_NO_PAGE_SPLIT flag requirement for KNI + IOVA
> > as
> > > > VA
> > > >
> > > >
> > >
> > > As long as we all are aware of what that means and agree with that
> > > consequence (namely, separate codepaths for KNI and other PMD's) then
> > > i have no specific objections.
> > 
> > Sorry for the late feedback.
> > 
> > I think we can change the default behavior of mempool populate(), to prevent
> > objects from being accross 2 pages, except if the size of the object is bigger than
> > the size of the page. This is already what is done in
> > rte_mempool_op_calc_mem_size_default() when we want to estimate the
> > amount of memory needed to allocate N objects.
> > 
> > This would avoid the introduction of a specific API to allocate packets for kni,
> > and a specific mempool flag.
> > 
> > About the problem of 9K mbuf mentionned by Anatoly, could we imagine a
> > check in kni code, that just returns an error "does not work with
> > size(mbuf) > size(page)" ?
> > 
> 
> Yes, change in default behavior avoids new APIs or flags.
> Two minor changes on top of  above suggestions.
> 1) Can flag(NO_PAGE_SPLIT) be retained.?,  sequence is like,  flag is set by default in rte_mempool_populate_default()
> and later it can be cleared based on obj_per_page in rte_mempool_op_calc_mem_size_default(). I do not see specific
> requirement of these flag apart from handling above sequence.

Sorry, I don't get why you want to keep this flag. Is it to facilitate
the error check in the KNI code?

The flags are used by the mempool user to ask for a specific behavior,
so if we change the default behavior, there is nothing to change in the
user API.

> 2) For problems of 9k mbuf, I think that check could be addressed in kni lib(in rte_kni_init and return error).

You can use rte_mempool_obj_iter() to iterate over the objects (mbufs) in the
mempool, to ensure that none of them spans 2 pages.


* Re: [dpdk-dev] [EXT] Re: [PATCH v6 0/4] add IOVA = VA support in KNI
  2019-07-16  9:55             ` Olivier Matz
@ 2019-07-16 10:07               ` Vamsi Krishna Attunuru
  0 siblings, 0 replies; 9+ messages in thread
From: Vamsi Krishna Attunuru @ 2019-07-16 10:07 UTC (permalink / raw)
  To: Olivier Matz
  Cc: Burakov, Anatoly, Jerin Jacob Kollanukkaran, Ferruh Yigit, dev,
	arybchenko



> -----Original Message-----
> From: Olivier Matz <olivier.matz@6wind.com>
> Sent: Tuesday, July 16, 2019 3:26 PM
> To: Vamsi Krishna Attunuru <vattunuru@marvell.com>
> Cc: Burakov, Anatoly <anatoly.burakov@intel.com>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Ferruh Yigit <ferruh.yigit@intel.com>; dev@dpdk.org;
> arybchenko@solarflare.com
> Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v6 0/4] add IOVA = VA support in KNI
> 
> Hi,
> 
> On Tue, Jul 16, 2019 at 09:40:59AM +0000, Vamsi Krishna Attunuru wrote:
> >
> >
> > > -----Original Message-----
> > > From: Olivier Matz <olivier.matz@6wind.com>
> > > Sent: Tuesday, July 16, 2019 2:17 PM
> > > To: Burakov, Anatoly <anatoly.burakov@intel.com>
> > > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ferruh Yigit
> > > <ferruh.yigit@intel.com>; Vamsi Krishna Attunuru
> > > <vattunuru@marvell.com>; dev@dpdk.org; arybchenko@solarflare.com
> > > Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v6 0/4] add IOVA = VA
> > > support in KNI
> > >
> > > Hi,
> > >
> > > On Mon, Jul 15, 2019 at 10:38:53AM +0100, Burakov, Anatoly wrote:
> > > > On 15-Jul-19 5:54 AM, Jerin Jacob Kollanukkaran wrote:
> > > > > > > > > > > (also, i don't really like the name NO_PAGE_BOUND
> > > > > > > > > > > since in memzone API there's a "bounded memzone"
> > > > > > > > > > > allocation API, and this flag's name reads like
> > > > > > > > > > > objects would not be bounded by page size, not that
> > > > > > > > > > > they won't cross page
> > > > > > > > > > > boundary)
> > > > > > > > > >
> > > > > > > > > > No strong opinion for the name. What name you suggest?
> > > > > > > > >
> > > > > > > > > How about something like MEMPOOL_F_NO_PAGE_SPLIT?
> > > > > > > >
> > > > > > > > Looks good to me.
> > > > > > > >
> > > > > > > > In summary, Change wrt existing patch"
> > > > > > > > - Change NO_PAGE_BOUND to MEMPOOL_F_NO_PAGE_SPLIT
> > > > > > > > - Set this flag in  rte_pktmbuf_pool_create () when
> > > > > > > rte_eal_has_hugepages() ||
> > > > > > > >    rte_malloc_heap_socket_is_external(socket_id))
> > > > > > >
> > > > > > > If we are to have a special KNI allocation API, would we even need
> that?
> > > > > >
> > > > > > Not need this change in rte_pktmbuf_pool_create () if we
> > > > > > introduce a new rte_kni_pktmbuf_pool_create () API.
> > > > >
> > > > > Ferruh, Olivier, Anatoly,
> > > > >
> > > > > Any objection to create new rte_kni_pktmbuf_pool_create () API
> > > > > to embedded MEMPOOL_F_NO_PAGE_SPLIT flag requirement for KNI +
> > > > > IOVA
> > > as
> > > > > VA
> > > > >
> > > > >
> > > >
> > > > As long as we all are aware of what that means and agree with that
> > > > consequence (namely, separate codepaths for KNI and other PMD's)
> > > > then i have no specific objections.
> > >
> > > Sorry for the late feedback.
> > >
> > > I think we can change the default behavior of mempool populate(), to
> > > prevent objects from being accross 2 pages, except if the size of
> > > the object is bigger than the size of the page. This is already what
> > > is done in
> > > rte_mempool_op_calc_mem_size_default() when we want to estimate the
> > > amount of memory needed to allocate N objects.
> > >
> > > This would avoid the introduction of a specific API to allocate
> > > packets for kni, and a specific mempool flag.
> > >
> > > About the problem of 9K mbuf mentionned by Anatoly, could we imagine
> > > a check in kni code, that just returns an error "does not work with
> > > size(mbuf) > size(page)" ?
> > >
> >
> > Yes, change in default behavior avoids new APIs or flags.
> > Two minor changes on top of  above suggestions.
> > 1) Can flag(NO_PAGE_SPLIT) be retained.?,  sequence is like,  flag is
> > set by default in rte_mempool_populate_default() and later it can be
> > cleared based on obj_per_page in rte_mempool_op_calc_mem_size_default().
> I do not see specific requirement of these flag apart from handling above
> sequence.
> 
> Sorry, I don't get why you want to keep this flag. Is it to facilitate the error check
> in kni code?
Yes, I thought it was only for the error check.

> 
> The flags are used by the mempool user to ask for a specific behavior, so if we
> change the default behavior, there is nothing to change to the user API.

Correct, the flags are meant for mempool users. As you suggested, changing the
default behavior removes the need for new APIs or flags.

> 
> > 2) For problems of 9k mbuf, I think that check could be addressed in kni lib(in
> rte_kni_init and return error).
> 
> You can use rte_mempool_obj_iter() to iterate the objects (mbufs) in the
> mempool, to ensure that none of them is accross 2 pages.


