All of lore.kernel.org
 help / color / mirror / Atom feed
* rte_ring features in use (or not)
@ 2017-01-25 12:14 Bruce Richardson
  2017-01-25 12:16 ` Bruce Richardson
                   ` (21 more replies)
  0 siblings, 22 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-01-25 12:14 UTC (permalink / raw)
  To: dev

Hi all,

while looking at the rte_ring code, I'm wondering if we can simplify
that a bit by removing some of the code it in that may not be used.
Specifically:

* Does anyone use the NIC stats functionality for debugging? I've
  certainly never seen it used, and it's presence makes the rest less
  readable. Can it be dropped?

* RTE_RING_PAUSE_REP_COUNT is set to be disabled at build time, and
  so does anyone actually use this? Can it be dropped?

* Who uses the watermarks feature as is? I know we have a sample app
  that uses it, but there are better ways I think to achieve the same
  goal while simplifying the ring implementation. Rather than have a set
  watermark on enqueue, have both enqueue and dequeue functions return
  the number of free or used slots available in the ring (in case of
  enqueue, how many free there are, in case of dequeue, how many items
  are available). Easier to implement and far more useful to the app.

Thoughts?

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
@ 2017-01-25 12:16 ` Bruce Richardson
  2017-01-25 13:20 ` Olivier MATZ
                   ` (20 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-01-25 12:16 UTC (permalink / raw)
  To: dev

On Wed, Jan 25, 2017 at 12:14:56PM +0000, Bruce Richardson wrote:
> Hi all,
> 
> while looking at the rte_ring code, I'm wondering if we can simplify
> that a bit by removing some of the code it in that may not be used.
> Specifically:
> 
> * Does anyone use the NIC stats functionality for debugging? I've

By NIC stat, I mean ring stats here! Too much ethernet on the brain! :-(

/Bruce

>   certainly never seen it used, and it's presence makes the rest less
>   readable. Can it be dropped?
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
  2017-01-25 12:16 ` Bruce Richardson
@ 2017-01-25 13:20 ` Olivier MATZ
  2017-01-25 13:54   ` Bruce Richardson
  2017-01-25 16:39   ` Stephen Hemminger
  2017-02-07 14:12 ` [PATCH RFCv3 00/19] ring cleanup and generalization Bruce Richardson
                   ` (19 subsequent siblings)
  21 siblings, 2 replies; 37+ messages in thread
From: Olivier MATZ @ 2017-01-25 13:20 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

On Wed, 25 Jan 2017 12:14:56 +0000, Bruce Richardson
<bruce.richardson@intel.com> wrote:
> Hi all,
> 
> while looking at the rte_ring code, I'm wondering if we can simplify
> that a bit by removing some of the code it in that may not be used.
> Specifically:
> 
> * Does anyone use the NIC stats functionality for debugging? I've
>   certainly never seen it used, and it's presence makes the rest less
>   readable. Can it be dropped?

What do you call NIC stats? The stats that are enabled with
RTE_LIBRTE_RING_DEBUG?

If yes, I was recently thinking almost the same about mempool stats. The
need to enable stats at compilation makes them less usable. On the
other hand, I feel the mempool/ring stats may be useful, for instance
to check if mbufs are used from mempool cache, and not from common pool.

For mempool, my conclusion was:
- Enabling stats (debug) changes the ABI, because it adds a field in
  the structure, this is bad
- enabling stats is not the same than enabling debug, we should have 2
  different ifdefs
- if statistics don't cost a lot, they should be enabled by default,
  because it's a good debug tool (ex: have a stats for each access to
  common pool)

For the ring, in my opinion, the stats could be fully removed.


> * RTE_RING_PAUSE_REP_COUNT is set to be disabled at build time, and
>   so does anyone actually use this? Can it be dropped?

This option looks like a hack to use the ring in conditions where it
should no be used (preemptable threads). And having a compile-time
option for this kind of stuff is not in vogue ;)


> * Who uses the watermarks feature as is? I know we have a sample app
>   that uses it, but there are better ways I think to achieve the same
>   goal while simplifying the ring implementation. Rather than have a
> set watermark on enqueue, have both enqueue and dequeue functions
> return the number of free or used slots available in the ring (in
> case of enqueue, how many free there are, in case of dequeue, how
> many items are available). Easier to implement and far more useful to
> the app.

+1

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-25 13:20 ` Olivier MATZ
@ 2017-01-25 13:54   ` Bruce Richardson
  2017-01-25 14:48     ` Bruce Richardson
  2017-01-25 16:39   ` Stephen Hemminger
  1 sibling, 1 reply; 37+ messages in thread
From: Bruce Richardson @ 2017-01-25 13:54 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev

On Wed, Jan 25, 2017 at 02:20:52PM +0100, Olivier MATZ wrote:
> On Wed, 25 Jan 2017 12:14:56 +0000, Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> > Hi all,
> > 
> > while looking at the rte_ring code, I'm wondering if we can simplify
> > that a bit by removing some of the code it in that may not be used.
> > Specifically:
> > 
> > * Does anyone use the NIC stats functionality for debugging? I've
> >   certainly never seen it used, and it's presence makes the rest less
> >   readable. Can it be dropped?
> 
> What do you call NIC stats? The stats that are enabled with
> RTE_LIBRTE_RING_DEBUG?

Yes. By NIC I meant ring. :-(
> 
> If yes, I was recently thinking almost the same about mempool stats. The
> need to enable stats at compilation makes them less usable. On the
> other hand, I feel the mempool/ring stats may be useful, for instance
> to check if mbufs are used from mempool cache, and not from common pool.
> 
> For mempool, my conclusion was:
> - Enabling stats (debug) changes the ABI, because it adds a field in
>   the structure, this is bad
> - enabling stats is not the same than enabling debug, we should have 2
>   different ifdefs
> - if statistics don't cost a lot, they should be enabled by default,
>   because it's a good debug tool (ex: have a stats for each access to
>   common pool)
> 
> For the ring, in my opinion, the stats could be fully removed.

That is my thinking too. For mempool, I'd wait to see the potential
performance hits before deciding whether or not to enable by default.
Having them run-time enabled may also be an option too - if the branches
get predicted properly, there should be little to no impact as we avoid
all the writes to the stats, which is likely to be where the biggest hit
is.

> 
> 
> > * RTE_RING_PAUSE_REP_COUNT is set to be disabled at build time, and
> >   so does anyone actually use this? Can it be dropped?
> 
> This option looks like a hack to use the ring in conditions where it
> should no be used (preemptable threads). And having a compile-time
> option for this kind of stuff is not in vogue ;)

Definitely agree. As well as being a compile time option, I also think
that it's the wrong way to solve the problem. If we want to break out of
a loop like that early, then we should look to do a non-blocking version
of the APIs with a subsequent tail update call. That way an app can
decide per-ring when to sleep or context switch, or can even to do other
work while it waits.
However, I wouldn't be in a rush to implement that without a compelling
use-case.

> 
> 
> > * Who uses the watermarks feature as is? I know we have a sample app
> >   that uses it, but there are better ways I think to achieve the same
> >   goal while simplifying the ring implementation. Rather than have a
> > set watermark on enqueue, have both enqueue and dequeue functions
> > return the number of free or used slots available in the ring (in
> > case of enqueue, how many free there are, in case of dequeue, how
> > many items are available). Easier to implement and far more useful to
> > the app.
> 
> +1
> 
> 
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-25 13:54   ` Bruce Richardson
@ 2017-01-25 14:48     ` Bruce Richardson
  2017-01-25 15:59       ` Wiles, Keith
  0 siblings, 1 reply; 37+ messages in thread
From: Bruce Richardson @ 2017-01-25 14:48 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev

On Wed, Jan 25, 2017 at 01:54:04PM +0000, Bruce Richardson wrote:
> On Wed, Jan 25, 2017 at 02:20:52PM +0100, Olivier MATZ wrote:
> > On Wed, 25 Jan 2017 12:14:56 +0000, Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > > Hi all,
> > > 
> > > while looking at the rte_ring code, I'm wondering if we can simplify
> > > that a bit by removing some of the code it in that may not be used.
> > > Specifically:
> > > 
> > > * Does anyone use the NIC stats functionality for debugging? I've
> > >   certainly never seen it used, and it's presence makes the rest less
> > >   readable. Can it be dropped?
> > 
> > What do you call NIC stats? The stats that are enabled with
> > RTE_LIBRTE_RING_DEBUG?
> 
> Yes. By NIC I meant ring. :-(
> > 
<snip>
> > For the ring, in my opinion, the stats could be fully removed.
> 
> That is my thinking too. For mempool, I'd wait to see the potential
> performance hits before deciding whether or not to enable by default.
> Having them run-time enabled may also be an option too - if the branches
> get predicted properly, there should be little to no impact as we avoid
> all the writes to the stats, which is likely to be where the biggest hit
> is.
> 
> > 
> > 
> > > * RTE_RING_PAUSE_REP_COUNT is set to be disabled at build time, and
> > >   so does anyone actually use this? Can it be dropped?
> > 
> > This option looks like a hack to use the ring in conditions where it
> > should no be used (preemptable threads). And having a compile-time
> > option for this kind of stuff is not in vogue ;)
> 
<snip>
> > 
> > 
> > > * Who uses the watermarks feature as is? I know we have a sample app
> > >   that uses it, but there are better ways I think to achieve the same
> > >   goal while simplifying the ring implementation. Rather than have a
> > > set watermark on enqueue, have both enqueue and dequeue functions
> > > return the number of free or used slots available in the ring (in
> > > case of enqueue, how many free there are, in case of dequeue, how
> > > many items are available). Easier to implement and far more useful to
> > > the app.
> > 
> > +1
> > 
Bonus question:
* Do we know how widely used the enq_bulk/deq_bulk functions are? They
  are useful for unit tests, so they do have uses, but I think it would
  be good if we harmonized the return values between bulk and burst
  functions. Right now:
    enq_bulk  - only enqueues all elements or none. Returns 0 for all, or
                negative error for none.
    enq_burst - enqueues as many elements as possible. Returns the number
                enqueued.
  I think it would be better if bulk and burst both returned the number
  enqueued, and only differed in the case of the behaviour when not all
  elements could be enqueued.
  
  That would mean an API change for enq_bulk, where it would return only
  0 or N, rather than 0 or negative. While we can map one set of return
  values to another inside the rte_ring library, I'm not sure I see a
  good reason to keep the old behaviour except for backward compatibility.
  Changing it makes it easier to switch between the two functions in
  code, and avoids confusion as to what the return values could be. Is
  it worth doing so? [My opinion is yes!]
  

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-25 14:48     ` Bruce Richardson
@ 2017-01-25 15:59       ` Wiles, Keith
  2017-01-25 16:57         ` Bruce Richardson
  0 siblings, 1 reply; 37+ messages in thread
From: Wiles, Keith @ 2017-01-25 15:59 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: Olivier MATZ, dev



Sent from my iPhone

> On Jan 25, 2017, at 7:48 AM, Bruce Richardson <bruce.richardson@intel.com> wrote:
> 
>> On Wed, Jan 25, 2017 at 01:54:04PM +0000, Bruce Richardson wrote:
>>> On Wed, Jan 25, 2017 at 02:20:52PM +0100, Olivier MATZ wrote:
>>> On Wed, 25 Jan 2017 12:14:56 +0000, Bruce Richardson
>>> <bruce.richardson@intel.com> wrote:
>>>> Hi all,
>>>> 
>>>> while looking at the rte_ring code, I'm wondering if we can simplify
>>>> that a bit by removing some of the code it in that may not be used.
>>>> Specifically:
>>>> 
>>>> * Does anyone use the NIC stats functionality for debugging? I've
>>>>  certainly never seen it used, and it's presence makes the rest less
>>>>  readable. Can it be dropped?
>>> 
>>> What do you call NIC stats? The stats that are enabled with
>>> RTE_LIBRTE_RING_DEBUG?
>> 
>> Yes. By NIC I meant ring. :-(
>>> 
> <snip>
>>> For the ring, in my opinion, the stats could be fully removed.
>> 
>> That is my thinking too. For mempool, I'd wait to see the potential
>> performance hits before deciding whether or not to enable by default.
>> Having them run-time enabled may also be an option too - if the branches
>> get predicted properly, there should be little to no impact as we avoid
>> all the writes to the stats, which is likely to be where the biggest hit
>> is.
>> 
>>> 
>>> 
>>>> * RTE_RING_PAUSE_REP_COUNT is set to be disabled at build time, and
>>>>  so does anyone actually use this? Can it be dropped?
>>> 
>>> This option looks like a hack to use the ring in conditions where it
>>> should no be used (preemptable threads). And having a compile-time
>>> option for this kind of stuff is not in vogue ;)
>> 
> <snip>
>>> 
>>> 
>>>> * Who uses the watermarks feature as is? I know we have a sample app
>>>>  that uses it, but there are better ways I think to achieve the same
>>>>  goal while simplifying the ring implementation. Rather than have a
>>>> set watermark on enqueue, have both enqueue and dequeue functions
>>>> return the number of free or used slots available in the ring (in
>>>> case of enqueue, how many free there are, in case of dequeue, how
>>>> many items are available). Easier to implement and far more useful to
>>>> the app.
>>> 
>>> +1
>>> 
> Bonus question:
> * Do we know how widely used the enq_bulk/deq_bulk functions are? They
>  are useful for unit tests, so they do have uses, but I think it would
>  be good if we harmonized the return values between bulk and burst
>  functions. Right now:
>    enq_bulk  - only enqueues all elements or none. Returns 0 for all, or
>                negative error for none.
>    enq_burst - enqueues as many elements as possible. Returns the number
>                enqueued.

I do use the apis in pktgen and the difference in return values has got me once. Making them common would be great,  but the problem is backward compat to old versions I would need to have an ifdef in pktgen now. So it seems like we moved the problem to the application.

I would like to see the old API kept and a new API with the new behavior. I know it adds another API but one of the API would be nothing more than wrapper function if not a macro. 

Would that be more reasonable then changing the ABI?

>  I think it would be better if bulk and burst both returned the number
>  enqueued, and only differed in the case of the behaviour when not all
>  elements could be enqueued.
> 
>  That would mean an API change for enq_bulk, where it would return only
>  0 or N, rather than 0 or negative. While we can map one set of return
>  values to another inside the rte_ring library, I'm not sure I see a
>  good reason to keep the old behaviour except for backward compatibility.
>  Changing it makes it easier to switch between the two functions in
>  code, and avoids confusion as to what the return values could be. Is
>  it worth doing so? [My opinion is yes!]
> 
> 
> Regards,
> /Bruce

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-25 13:20 ` Olivier MATZ
  2017-01-25 13:54   ` Bruce Richardson
@ 2017-01-25 16:39   ` Stephen Hemminger
  1 sibling, 0 replies; 37+ messages in thread
From: Stephen Hemminger @ 2017-01-25 16:39 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: Bruce Richardson, dev

On Wed, 25 Jan 2017 14:20:52 +0100
Olivier MATZ <olivier.matz@6wind.com> wrote:

> > * Who uses the watermarks feature as is? I know we have a sample app
> >   that uses it, but there are better ways I think to achieve the same
> >   goal while simplifying the ring implementation. Rather than have a
> > set watermark on enqueue, have both enqueue and dequeue functions
> > return the number of free or used slots available in the ring (in
> > case of enqueue, how many free there are, in case of dequeue, how
> > many items are available). Easier to implement and far more useful to
> > the app.  
> 
> +1


I did use the watermark feature once, it was for case of ring between
two threads. Only wanted to wake up other thread if > N packets were
in ring.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-25 15:59       ` Wiles, Keith
@ 2017-01-25 16:57         ` Bruce Richardson
  2017-01-25 17:29           ` Ananyev, Konstantin
  2017-01-25 22:27           ` Wiles, Keith
  0 siblings, 2 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-01-25 16:57 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: Olivier MATZ, dev

On Wed, Jan 25, 2017 at 03:59:55PM +0000, Wiles, Keith wrote:
> 
> 
> Sent from my iPhone
> 
> > On Jan 25, 2017, at 7:48 AM, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > 
> >> On Wed, Jan 25, 2017 at 01:54:04PM +0000, Bruce Richardson wrote:
> >>> On Wed, Jan 25, 2017 at 02:20:52PM +0100, Olivier MATZ wrote:
> >>> On Wed, 25 Jan 2017 12:14:56 +0000, Bruce Richardson
> >>> <bruce.richardson@intel.com> wrote:
> >>>> Hi all,
> >>>> 
> >>>> while looking at the rte_ring code, I'm wondering if we can simplify
> >>>> that a bit by removing some of the code it in that may not be used.
> >>>> Specifically:
> >>>> 
> >>>> * Does anyone use the NIC stats functionality for debugging? I've
> >>>>  certainly never seen it used, and it's presence makes the rest less
> >>>>  readable. Can it be dropped?
> >>> 
> >>> What do you call NIC stats? The stats that are enabled with
> >>> RTE_LIBRTE_RING_DEBUG?
> >> 
> >> Yes. By NIC I meant ring. :-(
> >>> 
> > <snip>
> >>> For the ring, in my opinion, the stats could be fully removed.
> >> 
> >> That is my thinking too. For mempool, I'd wait to see the potential
> >> performance hits before deciding whether or not to enable by default.
> >> Having them run-time enabled may also be an option too - if the branches
> >> get predicted properly, there should be little to no impact as we avoid
> >> all the writes to the stats, which is likely to be where the biggest hit
> >> is.
> >> 
> >>> 
> >>> 
> >>>> * RTE_RING_PAUSE_REP_COUNT is set to be disabled at build time, and
> >>>>  so does anyone actually use this? Can it be dropped?
> >>> 
> >>> This option looks like a hack to use the ring in conditions where it
> >>> should no be used (preemptable threads). And having a compile-time
> >>> option for this kind of stuff is not in vogue ;)
> >> 
> > <snip>
> >>> 
> >>> 
> >>>> * Who uses the watermarks feature as is? I know we have a sample app
> >>>>  that uses it, but there are better ways I think to achieve the same
> >>>>  goal while simplifying the ring implementation. Rather than have a
> >>>> set watermark on enqueue, have both enqueue and dequeue functions
> >>>> return the number of free or used slots available in the ring (in
> >>>> case of enqueue, how many free there are, in case of dequeue, how
> >>>> many items are available). Easier to implement and far more useful to
> >>>> the app.
> >>> 
> >>> +1
> >>> 
> > Bonus question:
> > * Do we know how widely used the enq_bulk/deq_bulk functions are? They
> >  are useful for unit tests, so they do have uses, but I think it would
> >  be good if we harmonized the return values between bulk and burst
> >  functions. Right now:
> >    enq_bulk  - only enqueues all elements or none. Returns 0 for all, or
> >                negative error for none.
> >    enq_burst - enqueues as many elements as possible. Returns the number
> >                enqueued.
> 
> I do use the apis in pktgen and the difference in return values has got me once. Making them common would be great,  but the problem is backward compat to old versions I would need to have an ifdef in pktgen now. So it seems like we moved the problem to the application.
> 

Yes, an ifdef would be needed, but how many versions of DPDK back do you
support? Could the ifdef be removed again after say, 6 months?

> I would like to see the old API kept and a new API with the new behavior. I know it adds another API but one of the API would be nothing more than wrapper function if not a macro. 
> 
> Would that be more reasonable then changing the ABI?

Technically, this would be an API rather than ABI change, since the
functions are inlined in the code. However, it's not the only API change
I'm looking to make here - I'd like to have all the functions start
returning details of the state of the ring, rather than have the
watermarks facility. If we add all new functions for this and keep the
old ones around, we are just increasing our maintenance burden.

I'd like other opinions here. Do we see increasing the API surface as
the best solution, or are we ok to change the APIs of a key library like
the rings one?

/Bruce

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-25 16:57         ` Bruce Richardson
@ 2017-01-25 17:29           ` Ananyev, Konstantin
  2017-01-31 10:53             ` Olivier Matz
  2017-01-25 22:27           ` Wiles, Keith
  1 sibling, 1 reply; 37+ messages in thread
From: Ananyev, Konstantin @ 2017-01-25 17:29 UTC (permalink / raw)
  To: Richardson, Bruce, Wiles, Keith; +Cc: Olivier MATZ, dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> Sent: Wednesday, January 25, 2017 4:58 PM
> To: Wiles, Keith <keith.wiles@intel.com>
> Cc: Olivier MATZ <olivier.matz@6wind.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] rte_ring features in use (or not)
> 
> On Wed, Jan 25, 2017 at 03:59:55PM +0000, Wiles, Keith wrote:
> >
> >
> > Sent from my iPhone
> >
> > > On Jan 25, 2017, at 7:48 AM, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > >
> > >> On Wed, Jan 25, 2017 at 01:54:04PM +0000, Bruce Richardson wrote:
> > >>> On Wed, Jan 25, 2017 at 02:20:52PM +0100, Olivier MATZ wrote:
> > >>> On Wed, 25 Jan 2017 12:14:56 +0000, Bruce Richardson
> > >>> <bruce.richardson@intel.com> wrote:
> > >>>> Hi all,
> > >>>>
> > >>>> while looking at the rte_ring code, I'm wondering if we can simplify
> > >>>> that a bit by removing some of the code it in that may not be used.
> > >>>> Specifically:
> > >>>>
> > >>>> * Does anyone use the NIC stats functionality for debugging? I've
> > >>>>  certainly never seen it used, and it's presence makes the rest less
> > >>>>  readable. Can it be dropped?
> > >>>
> > >>> What do you call NIC stats? The stats that are enabled with
> > >>> RTE_LIBRTE_RING_DEBUG?
> > >>
> > >> Yes. By NIC I meant ring. :-(
> > >>>
> > > <snip>
> > >>> For the ring, in my opinion, the stats could be fully removed.
> > >>
> > >> That is my thinking too. For mempool, I'd wait to see the potential
> > >> performance hits before deciding whether or not to enable by default.
> > >> Having them run-time enabled may also be an option too - if the branches
> > >> get predicted properly, there should be little to no impact as we avoid
> > >> all the writes to the stats, which is likely to be where the biggest hit
> > >> is.
> > >>
> > >>>
> > >>>
> > >>>> * RTE_RING_PAUSE_REP_COUNT is set to be disabled at build time, and
> > >>>>  so does anyone actually use this? Can it be dropped?
> > >>>
> > >>> This option looks like a hack to use the ring in conditions where it
> > >>> should no be used (preemptable threads). And having a compile-time
> > >>> option for this kind of stuff is not in vogue ;)
> > >>
> > > <snip>
> > >>>
> > >>>
> > >>>> * Who uses the watermarks feature as is? I know we have a sample app
> > >>>>  that uses it, but there are better ways I think to achieve the same
> > >>>>  goal while simplifying the ring implementation. Rather than have a
> > >>>> set watermark on enqueue, have both enqueue and dequeue functions
> > >>>> return the number of free or used slots available in the ring (in
> > >>>> case of enqueue, how many free there are, in case of dequeue, how
> > >>>> many items are available). Easier to implement and far more useful to
> > >>>> the app.
> > >>>
> > >>> +1
> > >>>
> > > Bonus question:
> > > * Do we know how widely used the enq_bulk/deq_bulk functions are? They
> > >  are useful for unit tests, so they do have uses, but I think it would
> > >  be good if we harmonized the return values between bulk and burst
> > >  functions. Right now:
> > >    enq_bulk  - only enqueues all elements or none. Returns 0 for all, or
> > >                negative error for none.
> > >    enq_burst - enqueues as many elements as possible. Returns the number
> > >                enqueued.
> >
> > I do use the apis in pktgen and the difference in return values has got me once. Making them common would be great,  but the problem is
> backward compat to old versions I would need to have an ifdef in pktgen now. So it seems like we moved the problem to the application.
> >
> 
> Yes, an ifdef would be needed, but how many versions of DPDK back do you
> support? Could the ifdef be removed again after say, 6 months?
> 
> > I would like to see the old API kept and a new API with the new behavior. I know it adds another API but one of the API would be nothing
> more than wrapper function if not a macro.
> >
> > Would that be more reasonable then changing the ABI?
> 
> Technically, this would be an API rather than ABI change, since the
> functions are inlined in the code. However, it's not the only API change
> I'm looking to make here - I'd like to have all the functions start
> returning details of the state of the ring, rather than have the
> watermarks facility. If we add all new functions for this and keep the
> old ones around, we are just increasing our maintenance burden.
> 
> I'd like other opinions here. Do we see increasing the API surface as
> the best solution, or are we ok to change the APIs of a key library like
> the rings one?

I am ok with changing API to make both _bulk and _burst return the same thing.
Konstantin 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-25 16:57         ` Bruce Richardson
  2017-01-25 17:29           ` Ananyev, Konstantin
@ 2017-01-25 22:27           ` Wiles, Keith
  1 sibling, 0 replies; 37+ messages in thread
From: Wiles, Keith @ 2017-01-25 22:27 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: Olivier MATZ, dev


> On Jan 25, 2017, at 9:57 AM, Richardson, Bruce <bruce.richardson@intel.com> wrote:
> 
> On Wed, Jan 25, 2017 at 03:59:55PM +0000, Wiles, Keith wrote:
>> 
>> 
>> Sent from my iPhone
>> 
>>> On Jan 25, 2017, at 7:48 AM, Bruce Richardson <bruce.richardson@intel.com> wrote:
>>> 
>>>> On Wed, Jan 25, 2017 at 01:54:04PM +0000, Bruce Richardson wrote:
>>>>> On Wed, Jan 25, 2017 at 02:20:52PM +0100, Olivier MATZ wrote:
>>>>> On Wed, 25 Jan 2017 12:14:56 +0000, Bruce Richardson
>>>>> <bruce.richardson@intel.com> wrote:
>>>>>> Hi all,
>>>>>> 
>>>>>> while looking at the rte_ring code, I'm wondering if we can simplify
>>>>>> that a bit by removing some of the code it in that may not be used.
>>>>>> Specifically:
>>>>>> 
>>>>>> * Does anyone use the NIC stats functionality for debugging? I've
>>>>>> certainly never seen it used, and it's presence makes the rest less
>>>>>> readable. Can it be dropped?
>>>>> 
>>>>> What do you call NIC stats? The stats that are enabled with
>>>>> RTE_LIBRTE_RING_DEBUG?
>>>> 
>>>> Yes. By NIC I meant ring. :-(
>>>>> 
>>> <snip>
>>>>> For the ring, in my opinion, the stats could be fully removed.
>>>> 
>>>> That is my thinking too. For mempool, I'd wait to see the potential
>>>> performance hits before deciding whether or not to enable by default.
>>>> Having them run-time enabled may also be an option too - if the branches
>>>> get predicted properly, there should be little to no impact as we avoid
>>>> all the writes to the stats, which is likely to be where the biggest hit
>>>> is.
>>>> 
>>>>> 
>>>>> 
>>>>>> * RTE_RING_PAUSE_REP_COUNT is set to be disabled at build time, and
>>>>>> so does anyone actually use this? Can it be dropped?
>>>>> 
>>>>> This option looks like a hack to use the ring in conditions where it
>>>>> should no be used (preemptable threads). And having a compile-time
>>>>> option for this kind of stuff is not in vogue ;)
>>>> 
>>> <snip>
>>>>> 
>>>>> 
>>>>>> * Who uses the watermarks feature as is? I know we have a sample app
>>>>>> that uses it, but there are better ways I think to achieve the same
>>>>>> goal while simplifying the ring implementation. Rather than have a
>>>>>> set watermark on enqueue, have both enqueue and dequeue functions
>>>>>> return the number of free or used slots available in the ring (in
>>>>>> case of enqueue, how many free there are, in case of dequeue, how
>>>>>> many items are available). Easier to implement and far more useful to
>>>>>> the app.
>>>>> 
>>>>> +1
>>>>> 
>>> Bonus question:
>>> * Do we know how widely used the enq_bulk/deq_bulk functions are? They
>>> are useful for unit tests, so they do have uses, but I think it would
>>> be good if we harmonized the return values between bulk and burst
>>> functions. Right now:
>>>   enq_bulk  - only enqueues all elements or none. Returns 0 for all, or
>>>               negative error for none.
>>>   enq_burst - enqueues as many elements as possible. Returns the number
>>>               enqueued.
>> 
>> I do use the apis in pktgen and the difference in return values has got me once. Making them common would be great,  but the problem is backward compat to old versions I would need to have an ifdef in pktgen now. So it seems like we moved the problem to the application.
>> 
> 
> Yes, an ifdef would be needed, but how many versions of DPDK back do you
> support? Could the ifdef be removed again after say, 6 months?

I have people trying to run 2.1 and 2.2 versions of Pktgen. I can cut them off, but I would prefer not to.
> 
>> I would like to see the old API kept and a new API with the new behavior. I know it adds another API but one of the API would be nothing more than wrapper function if not a macro. 
>> 
>> Would that be more reasonable then changing the ABI?
> 
> Technically, this would be an API rather than ABI change, since the
> functions are inlined in the code. However, it's not the only API change
> I'm looking to make here - I'd like to have all the functions start
> returning details of the state of the ring, rather than have the
> watermarks facility. If we add all new functions for this and keep the
> old ones around, we are just increasing our maintenance burden.
> 
> I'd like other opinions here. Do we see increasing the API surface as
> the best solution, or are we ok to change the APIs of a key library like
> the rings one?
> 
> /Bruce

Regards,
Keith

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-25 17:29           ` Ananyev, Konstantin
@ 2017-01-31 10:53             ` Olivier Matz
  2017-01-31 11:41               ` Bruce Richardson
  0 siblings, 1 reply; 37+ messages in thread
From: Olivier Matz @ 2017-01-31 10:53 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: Richardson, Bruce, Wiles, Keith, dev

On Wed, 25 Jan 2017 17:29:18 +0000, "Ananyev, Konstantin"
<konstantin.ananyev@intel.com> wrote:
> > > > Bonus question:
> > > > * Do we know how widely used the enq_bulk/deq_bulk functions
> > > > are? They are useful for unit tests, so they do have uses, but
> > > > I think it would be good if we harmonized the return values
> > > > between bulk and burst functions. Right now:
> > > >    enq_bulk  - only enqueues all elements or none. Returns 0
> > > > for all, or negative error for none.
> > > >    enq_burst - enqueues as many elements as possible. Returns
> > > > the number enqueued.  
> > >
> > > I do use the apis in pktgen and the difference in return values
> > > has got me once. Making them common would be great,  but the
> > > problem is  
> > backward compat to old versions I would need to have an ifdef in
> > pktgen now. So it seems like we moved the problem to the
> > application.  
> > >  
> > 
> > Yes, an ifdef would be needed, but how many versions of DPDK back
> > do you support? Could the ifdef be removed again after say, 6
> > months? 
> > > I would like to see the old API kept and a new API with the new
> > > behavior. I know it adds another API but one of the API would be
> > > nothing  
> > more than wrapper function if not a macro.  
> > >
> > > Would that be more reasonable then changing the ABI?  
> > 
> > Technically, this would be an API rather than ABI change, since the
> > functions are inlined in the code. However, it's not the only API
> > change I'm looking to make here - I'd like to have all the
> > functions start returning details of the state of the ring, rather
> > than have the watermarks facility. If we add all new functions for
> > this and keep the old ones around, we are just increasing our
> > maintenance burden.
> > 
> > I'd like other opinions here. Do we see increasing the API surface
> > as the best solution, or are we ok to change the APIs of a key
> > library like the rings one?  
> 
> I am ok with changing API to make both _bulk and _burst return the
> same thing. Konstantin 

I agree that the _bulk() functions returning 0 or -err can be confusing.
But it has at least one advantage: it explicitly shows that if user ask
for N enqueues/dequeues, it will either get N or 0, not something
between.

Changing the API of the existing _bulk() functions looks a bit
dangerous to me. There's probably a lot of code relying on the ring
API, and changing its behavior may break it.

I'd prefer to deprecate the old _bulk and _burst functions, and
introduce a new api, maybe something like:

  rte_ring_generic_dequeue(ring, objs, n, behavior, flags)
  -> return nb_objs or -err


Olivier

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-31 10:53             ` Olivier Matz
@ 2017-01-31 11:41               ` Bruce Richardson
  2017-01-31 12:10                 ` Bruce Richardson
  0 siblings, 1 reply; 37+ messages in thread
From: Bruce Richardson @ 2017-01-31 11:41 UTC (permalink / raw)
  To: Olivier Matz; +Cc: Ananyev, Konstantin, Wiles, Keith, dev

On Tue, Jan 31, 2017 at 11:53:49AM +0100, Olivier Matz wrote:
> On Wed, 25 Jan 2017 17:29:18 +0000, "Ananyev, Konstantin"
> <konstantin.ananyev@intel.com> wrote:
> > > > > Bonus question:
> > > > > * Do we know how widely used the enq_bulk/deq_bulk functions
> > > > > are? They are useful for unit tests, so they do have uses, but
> > > > > I think it would be good if we harmonized the return values
> > > > > between bulk and burst functions. Right now:
> > > > >    enq_bulk  - only enqueues all elements or none. Returns 0
> > > > > for all, or negative error for none.
> > > > >    enq_burst - enqueues as many elements as possible. Returns
> > > > > the number enqueued.  
> > > >
> > > > I do use the apis in pktgen and the difference in return values
> > > > has got me once. Making them common would be great,  but the
> > > > problem is  
> > > backward compat to old versions I would need to have an ifdef in
> > > pktgen now. So it seems like we moved the problem to the
> > > application.  
> > > >  
> > > 
> > > Yes, an ifdef would be needed, but how many versions of DPDK back
> > > do you support? Could the ifdef be removed again after say, 6
> > > months? 
> > > > I would like to see the old API kept and a new API with the new
> > > > behavior. I know it adds another API but one of the API would be
> > > > nothing  
> > > more than wrapper function if not a macro.  
> > > >
> > > > Would that be more reasonable then changing the ABI?  
> > > 
> > > Technically, this would be an API rather than ABI change, since the
> > > functions are inlined in the code. However, it's not the only API
> > > change I'm looking to make here - I'd like to have all the
> > > functions start returning details of the state of the ring, rather
> > > than have the watermarks facility. If we add all new functions for
> > > this and keep the old ones around, we are just increasing our
> > > maintenance burden.
> > > 
> > > I'd like other opinions here. Do we see increasing the API surface
> > > as the best solution, or are we ok to change the APIs of a key
> > > library like the rings one?  
> > 
> > I am ok with changing API to make both _bulk and _burst return the
> > same thing. Konstantin 
> 
> I agree that the _bulk() functions returning 0 or -err can be confusing.
> But it has at least one advantage: it explicitly shows that if user ask
> for N enqueues/dequeues, it will either get N or 0, not something
> between.
> 
> Changing the API of the existing _bulk() functions looks a bit
> dangerous to me. There's probably a lot of code relying on the ring
> API, and changing its behavior may break it.
> 
> I'd prefer to deprecate the old _bulk and _burst functions, and
> introduce a new api, maybe something like:
> 
>   rte_ring_generic_dequeue(ring, objs, n, behavior, flags)
>   -> return nb_objs or -err
> 
Don't like the -err, since it's not a valid value that can be used e.g.
in simple loops in the case that the user doesn't care about the exact
reason for error. I prefer having zero returned on error, with rte_errno
set appropriately, since then it is trivial for apps to ignore error
values they don't care about.
It also makes the APIs in a ring library consistent in that all will set
rte_errno on error, rather than returning the error code. It's not right
for rte_ring_create and rte_ring_lookup to return an error code since
they return pointers, not integer values.

As for deprecating the functions - I'm not sure about that. I think the
names of the existing functions are ok, and should be kept. I've a new
patchset of cleanups for rte_rings in the works. Let me try and finish
that and send it out as an RFC and we'll see what you think then.

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-31 11:41               ` Bruce Richardson
@ 2017-01-31 12:10                 ` Bruce Richardson
  2017-01-31 13:27                   ` Olivier Matz
  0 siblings, 1 reply; 37+ messages in thread
From: Bruce Richardson @ 2017-01-31 12:10 UTC (permalink / raw)
  To: Olivier Matz; +Cc: Ananyev, Konstantin, Wiles, Keith, dev

On Tue, Jan 31, 2017 at 11:41:42AM +0000, Bruce Richardson wrote:
> On Tue, Jan 31, 2017 at 11:53:49AM +0100, Olivier Matz wrote:
> > On Wed, 25 Jan 2017 17:29:18 +0000, "Ananyev, Konstantin"
> > <konstantin.ananyev@intel.com> wrote:
> > > > > > Bonus question:
> > > > > > * Do we know how widely used the enq_bulk/deq_bulk functions
> > > > > > are? They are useful for unit tests, so they do have uses, but
> > > > > > I think it would be good if we harmonized the return values
> > > > > > between bulk and burst functions. Right now:
> > > > > >    enq_bulk  - only enqueues all elements or none. Returns 0
> > > > > > for all, or negative error for none.
> > > > > >    enq_burst - enqueues as many elements as possible. Returns
> > > > > > the number enqueued.  
> > > > >
> > > > > I do use the apis in pktgen and the difference in return values
> > > > > has got me once. Making them common would be great,  but the
> > > > > problem is  
> > > > backward compat to old versions I would need to have an ifdef in
> > > > pktgen now. So it seems like we moved the problem to the
> > > > application.  
> > > > >  
> > > > 
> > > > Yes, an ifdef would be needed, but how many versions of DPDK back
> > > > do you support? Could the ifdef be removed again after say, 6
> > > > months? 
> > > > > I would like to see the old API kept and a new API with the new
> > > > > behavior. I know it adds another API but one of the API would be
> > > > > nothing  
> > > > more than wrapper function if not a macro.  
> > > > >
> > > > > Would that be more reasonable then changing the ABI?  
> > > > 
> > > > Technically, this would be an API rather than ABI change, since the
> > > > functions are inlined in the code. However, it's not the only API
> > > > change I'm looking to make here - I'd like to have all the
> > > > functions start returning details of the state of the ring, rather
> > > > than have the watermarks facility. If we add all new functions for
> > > > this and keep the old ones around, we are just increasing our
> > > > maintenance burden.
> > > > 
> > > > I'd like other opinions here. Do we see increasing the API surface
> > > > as the best solution, or are we ok to change the APIs of a key
> > > > library like the rings one?  
> > > 
> > > I am ok with changing API to make both _bulk and _burst return the
> > > same thing. Konstantin 
> > 
> > I agree that the _bulk() functions returning 0 or -err can be confusing.
> > But it has at least one advantage: it explicitly shows that if user ask
> > for N enqueues/dequeues, it will either get N or 0, not something
> > between.
> > 
> > Changing the API of the existing _bulk() functions looks a bit
> > dangerous to me. There's probably a lot of code relying on the ring
> > API, and changing its behavior may break it.
> > 
> > I'd prefer to deprecate the old _bulk and _burst functions, and
> > introduce a new api, maybe something like:
> > 
> >   rte_ring_generic_dequeue(ring, objs, n, behavior, flags)
> >   -> return nb_objs or -err
> > 
> Don't like the -err, since it's not a valid value that can be used e.g.
> in simple loops in the case that the user doesn't care about the exact
> reason for error. I prefer having zero returned on error, with rte_errno
> set appropriately, since then it is trivial for apps to ignore error
> values they don't care about.
> It also makes the APIs in a ring library consistent in that all will set
> rte_errno on error, rather than returning the error code. It's not right
> for rte_ring_create and rte_ring_lookup to return an error code since
> they return pointers, not integer values.
> 
> As for deprecating the functions - I'm not sure about that. I think the
> names of the existing functions are ok, and should be kept. I've a new
> patchset of cleanups for rte_rings in the works. Let me try and finish
> that and send it out as an RFC and we'll see what you think then.
> 
Sorry, I realised on re-reading this reply seemed overly negative,
sorry. I can actually see the case for deprecating both sets of
functions to allow us to "start afresh". If we do so, are we as well to
just replace the whole library with a new one, e.g. rte_fifo, which
would allow us the freedom to keep e.g. functions with "burst" in the
name if we so wish? If might also allow an easier transition.

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-31 12:10                 ` Bruce Richardson
@ 2017-01-31 13:27                   ` Olivier Matz
  2017-01-31 13:46                     ` Bruce Richardson
  0 siblings, 1 reply; 37+ messages in thread
From: Olivier Matz @ 2017-01-31 13:27 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: Ananyev, Konstantin, Wiles, Keith, dev

On Tue, 31 Jan 2017 12:10:50 +0000, Bruce Richardson
<bruce.richardson@intel.com> wrote:
> On Tue, Jan 31, 2017 at 11:41:42AM +0000, Bruce Richardson wrote:
> > On Tue, Jan 31, 2017 at 11:53:49AM +0100, Olivier Matz wrote:  
> > > On Wed, 25 Jan 2017 17:29:18 +0000, "Ananyev, Konstantin"
> > > <konstantin.ananyev@intel.com> wrote:  
> > > > > > > Bonus question:
> > > > > > > * Do we know how widely used the enq_bulk/deq_bulk
> > > > > > > functions are? They are useful for unit tests, so they do
> > > > > > > have uses, but I think it would be good if we harmonized
> > > > > > > the return values between bulk and burst functions. Right
> > > > > > > now: enq_bulk  - only enqueues all elements or none.
> > > > > > > Returns 0 for all, or negative error for none.
> > > > > > >    enq_burst - enqueues as many elements as possible.
> > > > > > > Returns the number enqueued.    
> > > > > >
> > > > > > I do use the apis in pktgen and the difference in return
> > > > > > values has got me once. Making them common would be great,
> > > > > > but the problem is    
> > > > > backward compat to old versions I would need to have an ifdef
> > > > > in pktgen now. So it seems like we moved the problem to the
> > > > > application.    
> > > > > >    
> > > > > 
> > > > > Yes, an ifdef would be needed, but how many versions of DPDK
> > > > > back do you support? Could the ifdef be removed again after
> > > > > say, 6 months?   
> > > > > > I would like to see the old API kept and a new API with the
> > > > > > new behavior. I know it adds another API but one of the API
> > > > > > would be nothing    
> > > > > more than wrapper function if not a macro.    
> > > > > >
> > > > > > Would that be more reasonable then changing the ABI?    
> > > > > 
> > > > > Technically, this would be an API rather than ABI change,
> > > > > since the functions are inlined in the code. However, it's
> > > > > not the only API change I'm looking to make here - I'd like
> > > > > to have all the functions start returning details of the
> > > > > state of the ring, rather than have the watermarks facility.
> > > > > If we add all new functions for this and keep the old ones
> > > > > around, we are just increasing our maintenance burden.
> > > > > 
> > > > > I'd like other opinions here. Do we see increasing the API
> > > > > surface as the best solution, or are we ok to change the APIs
> > > > > of a key library like the rings one?    
> > > > 
> > > > I am ok with changing API to make both _bulk and _burst return
> > > > the same thing. Konstantin   
> > > 
> > > I agree that the _bulk() functions returning 0 or -err can be
> > > confusing. But it has at least one advantage: it explicitly shows
> > > that if user ask for N enqueues/dequeues, it will either get N or
> > > 0, not something between.
> > > 
> > > Changing the API of the existing _bulk() functions looks a bit
> > > dangerous to me. There's probably a lot of code relying on the
> > > ring API, and changing its behavior may break it.
> > > 
> > > I'd prefer to deprecate the old _bulk and _burst functions, and
> > > introduce a new api, maybe something like:
> > > 
> > >   rte_ring_generic_dequeue(ring, objs, n, behavior, flags)  
> > >   -> return nb_objs or -err  
> > >   
> > Don't like the -err, since it's not a valid value that can be used
> > e.g. in simple loops in the case that the user doesn't care about
> > the exact reason for error. I prefer having zero returned on error,
> > with rte_errno set appropriately, since then it is trivial for apps
> > to ignore error values they don't care about.
> > It also makes the APIs in a ring library consistent in that all
> > will set rte_errno on error, rather than returning the error code.
> > It's not right for rte_ring_create and rte_ring_lookup to return an
> > error code since they return pointers, not integer values.

My assumption was that functions returning an int should return an
error instead of rte_errno. By the way, it's actually the same debate
than http://dpdk.org/ml/archives/dev/2017-January/056546.html

In that particular case, I'm not convinced that this code:

	ret = ring_dequeue(r, objs, n);
	if (ret == 0) {
		/* handle error in rte_errno */
		return;
	}
	do_stuff_with_elements(objs, ret);

Is better/faster/clearer than this one:

	ret = ring_dequeue(r, objs, n);
	if (ret <= 0) {
		/* handle error in ret */
		return;
	}
	do_stuff_with_elements(objs, ret);


In the first case, you could argue that the "if (ret)" part could be
stripped if the app does not care about errors, but I think it's not
efficient to call the next function with 0 object. Also, this if() does
not necessarily adds a test since ring_dequeue() is inline.

In the first case, ring_dequeue needs to write rte_errno in memory on
error (because it's a global variable), even if the caller does not
look at it. In the second case, it can stay in a register.


> > 
> > As for deprecating the functions - I'm not sure about that. I think
> > the names of the existing functions are ok, and should be kept.
> > I've a new patchset of cleanups for rte_rings in the works. Let me
> > try and finish that and send it out as an RFC and we'll see what
> > you think then. 
> Sorry, I realised on re-reading this reply seemed overly negative,
> sorry.

haha, no problem :)


> I can actually see the case for deprecating both sets of
> functions to allow us to "start afresh". If we do so, are we as well
> to just replace the whole library with a new one, e.g. rte_fifo, which
> would allow us the freedom to keep e.g. functions with "burst" in the
> name if we so wish? If might also allow an easier transition.

Yes, that's also an option.

My fear is about changing the API of such widely used functions,
without triggering any compilation error because the prototypes stays
the same.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: rte_ring features in use (or not)
  2017-01-31 13:27                   ` Olivier Matz
@ 2017-01-31 13:46                     ` Bruce Richardson
  0 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-01-31 13:46 UTC (permalink / raw)
  To: Olivier Matz; +Cc: Ananyev, Konstantin, Wiles, Keith, dev

On Tue, Jan 31, 2017 at 02:27:18PM +0100, Olivier Matz wrote:
> On Tue, 31 Jan 2017 12:10:50 +0000, Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> > On Tue, Jan 31, 2017 at 11:41:42AM +0000, Bruce Richardson wrote:
> > > On Tue, Jan 31, 2017 at 11:53:49AM +0100, Olivier Matz wrote:  
> > > > On Wed, 25 Jan 2017 17:29:18 +0000, "Ananyev, Konstantin"
> > > > <konstantin.ananyev@intel.com> wrote:  
> > > > > > > > Bonus question:
> > > > > > > > * Do we know how widely used the enq_bulk/deq_bulk
> > > > > > > > functions are? They are useful for unit tests, so they do
> > > > > > > > have uses, but I think it would be good if we harmonized
> > > > > > > > the return values between bulk and burst functions. Right
> > > > > > > > now: enq_bulk  - only enqueues all elements or none.
> > > > > > > > Returns 0 for all, or negative error for none.
> > > > > > > >    enq_burst - enqueues as many elements as possible.
> > > > > > > > Returns the number enqueued.    
> > > > > > >
> > > > > > > I do use the apis in pktgen and the difference in return
> > > > > > > values has got me once. Making them common would be great,
> > > > > > > but the problem is    
> > > > > > backward compat to old versions I would need to have an ifdef
> > > > > > in pktgen now. So it seems like we moved the problem to the
> > > > > > application.    
> > > > > > >    
> > > > > > 
> > > > > > Yes, an ifdef would be needed, but how many versions of DPDK
> > > > > > back do you support? Could the ifdef be removed again after
> > > > > > say, 6 months?   
> > > > > > > I would like to see the old API kept and a new API with the
> > > > > > > new behavior. I know it adds another API but one of the API
> > > > > > > would be nothing    
> > > > > > more than wrapper function if not a macro.    
> > > > > > >
> > > > > > > Would that be more reasonable then changing the ABI?    
> > > > > > 
> > > > > > Technically, this would be an API rather than ABI change,
> > > > > > since the functions are inlined in the code. However, it's
> > > > > > not the only API change I'm looking to make here - I'd like
> > > > > > to have all the functions start returning details of the
> > > > > > state of the ring, rather than have the watermarks facility.
> > > > > > If we add all new functions for this and keep the old ones
> > > > > > around, we are just increasing our maintenance burden.
> > > > > > 
> > > > > > I'd like other opinions here. Do we see increasing the API
> > > > > > surface as the best solution, or are we ok to change the APIs
> > > > > > of a key library like the rings one?    
> > > > > 
> > > > > I am ok with changing API to make both _bulk and _burst return
> > > > > the same thing. Konstantin   
> > > > 
> > > > I agree that the _bulk() functions returning 0 or -err can be
> > > > confusing. But it has at least one advantage: it explicitly shows
> > > > that if user ask for N enqueues/dequeues, it will either get N or
> > > > 0, not something between.
> > > > 
> > > > Changing the API of the existing _bulk() functions looks a bit
> > > > dangerous to me. There's probably a lot of code relying on the
> > > > ring API, and changing its behavior may break it.
> > > > 
> > > > I'd prefer to deprecate the old _bulk and _burst functions, and
> > > > introduce a new api, maybe something like:
> > > > 
> > > >   rte_ring_generic_dequeue(ring, objs, n, behavior, flags)  
> > > >   -> return nb_objs or -err  
> > > >   
> > > Don't like the -err, since it's not a valid value that can be used
> > > e.g. in simple loops in the case that the user doesn't care about
> > > the exact reason for error. I prefer having zero returned on error,
> > > with rte_errno set appropriately, since then it is trivial for apps
> > > to ignore error values they don't care about.
> > > It also makes the APIs in a ring library consistent in that all
> > > will set rte_errno on error, rather than returning the error code.
> > > It's not right for rte_ring_create and rte_ring_lookup to return an
> > > error code since they return pointers, not integer values.
> 
> My assumption was that functions returning an int should return an
> error instead of rte_errno. By the way, it's actually the same debate
> than http://dpdk.org/ml/archives/dev/2017-January/056546.html
> 
> In that particular case, I'm not convinced that this code:
> 
> 	ret = ring_dequeue(r, objs, n);
> 	if (ret == 0) {
> 		/* handle error in rte_errno */
> 		return;
> 	}
> 	do_stuff_with_elements(objs, ret);
> 
> Is better/faster/clearer than this one:
> 
> 	ret = ring_dequeue(r, objs, n);
> 	if (ret <= 0) {
> 		/* handle error in ret */
> 		return;
> 	}
> 	do_stuff_with_elements(objs, ret);
> 
> 
> In the first case, you could argue that the "if (ret)" part could be
> stripped if the app does not care about errors, but I think it's not
> efficient to call the next function with 0 object. Also, this if() does
> not necessarily adds a test since ring_dequeue() is inline.
> 
> In the first case, ring_dequeue needs to write rte_errno in memory on
> error (because it's a global variable), even if the caller does not
> look at it. In the second case, it can stay in a register.
> 

I agree in many cases there is not a lot to choose between the two
methods.

However, I prefer the errno approach for 3 reasons:

1. Firstly, and primarily, it works in all cases, including for use with
  functions that return pointers. That allows a library like rte_ring to
  use it across all functions, rather than having some functions use an
  errno variable, or extra return value, and other functions return the
  error code directly.
2. It's how unix system calls work, so everyone is familiar with the
  scheme.
3. It allows the return value to be always in the valid domain of return
  values for the type. You can have dequeue functions that always return
  unsigned values, you can have functions that return enums etc. This
  means you can track stats and chain function calls without having to
  do error checking if you like: for example, moving packets between
  rings:
  	rte_ring_enqueue_burst(r2, objs, rte_ring_dequeue_burst(r1, objs, sizeof(objs));
  This is for me the least compelling of the reasons, but is still worth
  considering, and I do admit to liking the functional style of
  programming it allows.

> 
> > > 
> > > As for deprecating the functions - I'm not sure about that. I think
> > > the names of the existing functions are ok, and should be kept.
> > > I've a new patchset of cleanups for rte_rings in the works. Let me
> > > try and finish that and send it out as an RFC and we'll see what
> > > you think then. 
> > Sorry, I realised on re-reading this reply seemed overly negative,
> > sorry.
> 
> haha, no problem :)
> 
> 
> > I can actually see the case for deprecating both sets of
> > functions to allow us to "start afresh". If we do so, are we as well
> > to just replace the whole library with a new one, e.g. rte_fifo, which
> > would allow us the freedom to keep e.g. functions with "burst" in the
> > name if we so wish? If might also allow an easier transition.
> 
> Yes, that's also an option.
> 
> My fear is about changing the API of such widely used functions,
> without triggering any compilation error because the prototypes stays
> the same.
>
Don't worry, I also plan to change the prototypes! I want to add in an
extra parameter to each call to optionally return the amount of space in
the ring. This is can lead to large cycle savings - by avoiding extra
calls to rte_ring_count() or rte_ring_free_count() - in cases where it
is useful. We found that this lead to significant performance
improvements in the SW eventdev, as we had less ping-ponging of
cachelines between cores. Since we already know the amount of free space
in an enqueue call we can return that for free, while calling a separate
free_space API can lead to huge stalls if the cachelines have been
adjusted by another core in the meantime. [Yes, this does mean that the
value returned from an enqueue call would be inaccurate, but it can
still, in the case of SP rings, provide a guarantee that any number of
objects up to that number can be enqueued without error, since any
changes will only increase the number that can be enqueued]

The new APIs would therefore be:

	rte_ring_enqueue_bulk(r, objs, n, &free_space);

	rte_ring_dequeue_bulk(r, objs, n, &avail_objs);

This also would allow us to remove the watermarks feature, as the app
can itself check the free_space value against as many watermark
thresholds as it likes.
Hopefully I'll get an RFC with this change ready in the next few days
and we can base the discussion more on working code.

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 00/19] ring cleanup and generalization
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
  2017-01-25 12:16 ` Bruce Richardson
  2017-01-25 13:20 ` Olivier MATZ
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-14  8:32   ` Olivier Matz
  2017-02-07 14:12 ` [PATCH RFCv3 01/19] app/pdump: fix duplicate macro definition Bruce Richardson
                   ` (18 subsequent siblings)
  21 siblings, 1 reply; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

This patchset make a set of, sometimes non-backward compatible, cleanup
changes to the rte_ring code in order to improve it. The resulting code is
shorter*, since the existing functions are restructured to reduce code
duplication, as well as being more consistent in behaviour. The specific
changes made are explained in each patch which makes that change.

Key incompatibilities:
* The biggest, and probably most controversial change is that to the
  enqueue and dequeue APIs. The enqueue/deq burst and bulk functions have
  their function prototypes changed so that they all return an additional
  parameter, indicating the size of next call which is guaranteed to
  succeed. In case on enq, this is the number of available slots on the
  ring, and in case of deq, it is the number of objects which can be
  pulled. As well as this, the return value from the bulk functions have
  been changed to make them compatible with the burst functions. In all
  cases, the functions to enq/deq a set of objs now return the number of
  objects processed, 0 or N, in the case of bulk functions, 0, N or any
  value in between in the case of the burst ones. [Due to the extra
  parameter, the compiler will flag all instances of the function to
  allow the user to also change the return value logic at the same time]
* The parameters to the single object enq/deq functions have not been 
  changed. Because of that, the return value is also unmodified - as the
  compiler cannot automatically flag this to the user.

Potential further cleanups:
* To a certain extent the rte_ring structure has gone from being a whole
  ring structure, including a "ring" element itself, to just being a
  header which can be reused, along with the head/tail update functions
  to create new rings. For now, the enqueue code works by assuming
  that the ring data goes immediately after the header, but that can
  be changed to allow specialised ring implementations to put additional
  metadata of their own after the ring header. I didn't see this as being
  needed right now, but it may be worth considering for a V1 patchset.
* There are 9 enqueue functions and 9 dequeue functions in rte_ring.h. I
  suspect not all of those are used, so personally I would consider
  dropping the functions to enqueue/dequeue a single value using single
  or multi semantics, i.e. drop 
    rte_ring_sp_enqueue
    rte_ring_mp_enqueue
    rte_ring_sc_dequeue
    rte_ring_mc_dequeue
  That would still leave a single enqueue and dequeue function for working
  with a single object at a time.
* It should be possible to merge the head update code for enqueue and
  dequeue into a single function. The key difference between the two is
  the calculation of how far the index can be moved. I felt that the
  functions for moving the head index are sufficiently complicated with
  many parameters to them already, that trying to merge in more code would
  impede readability. However, if so desired this change can be made at a
  later stage without affecting ABI or API.

PERFORMANCE:
I've run performance autotests on a couple of (Intel) platforms. Looking
particularly at the core-2-core results, which I expect are the main ones
of interest, the performance after this patchset is a few cycles per packet
faster in my testing. I'm hoping it should be at least neutral perf-wise.

REQUEST FOR FEEDBACK:
* Are all of these changes worth making?
* Should they be made in existing ring code, or do we look to provide a 
  new fifo library to completely replace the ring one?
* How does the implementation of new ring types using this code compare vs
  that of the previous RFCs?

[*] LOC original rte_ring.h: 462. After patchset: 363. [Numbers generated
using David A. Wheeler's 'SLOCCount'.]

Bruce Richardson (19):
  app/pdump: fix duplicate macro definition
  ring: remove split cacheline build setting
  ring: create common structure for prod and cons metadata
  ring: add a function to return the ring size
  crypto/null: use ring size function
  ring: eliminate duplication of size and mask fields
  ring: remove debug setting
  ring: remove the yield when waiting for tail update
  ring: remove watermark support
  ring: make bulk and burst fn return vals consistent
  ring: allow enq fns to return free space value
  examples/quota_watermark: use ring space for watermarks
  ring: allow dequeue fns to return remaining entry count
  ring: reduce scope of local variables
  ring: separate out head index manipulation for enq/deq
  ring: create common function for updating tail idx
  ring: allow macros to work with any type of object
  ring: add object size parameter to memory size calculation
  ring: add event ring implementation

 app/pdump/main.c                                   |   3 +-
 app/test-pipeline/pipeline_hash.c                  |   5 +-
 app/test-pipeline/runtime.c                        |  19 +-
 app/test/Makefile                                  |   1 +
 app/test/commands.c                                |  52 --
 app/test/test_event_ring.c                         |  85 +++
 app/test/test_link_bonding_mode4.c                 |   6 +-
 app/test/test_pmd_ring_perf.c                      |  12 +-
 app/test/test_ring.c                               | 704 ++-----------------
 app/test/test_ring_perf.c                          |  36 +-
 app/test/test_table_acl.c                          |   2 +-
 app/test/test_table_pipeline.c                     |   2 +-
 app/test/test_table_ports.c                        |  12 +-
 app/test/virtual_pmd.c                             |   8 +-
 config/common_base                                 |   3 -
 doc/guides/prog_guide/env_abstraction_layer.rst    |   5 -
 doc/guides/prog_guide/ring_lib.rst                 |   7 -
 doc/guides/sample_app_ug/server_node_efd.rst       |   2 +-
 drivers/crypto/null/null_crypto_pmd.c              |   2 +-
 drivers/crypto/null/null_crypto_pmd_ops.c          |   2 +-
 drivers/net/bonding/rte_eth_bond_pmd.c             |   3 +-
 drivers/net/ring/rte_eth_ring.c                    |   4 +-
 examples/distributor/main.c                        |   5 +-
 examples/load_balancer/runtime.c                   |  34 +-
 .../client_server_mp/mp_client/client.c            |   9 +-
 .../client_server_mp/mp_server/main.c              |   2 +-
 examples/packet_ordering/main.c                    |  13 +-
 examples/qos_sched/app_thread.c                    |  14 +-
 examples/quota_watermark/qw/init.c                 |   5 +-
 examples/quota_watermark/qw/main.c                 |  15 +-
 examples/quota_watermark/qw/main.h                 |   1 +
 examples/quota_watermark/qwctl/commands.c          |   2 +-
 examples/quota_watermark/qwctl/qwctl.c             |   2 +
 examples/quota_watermark/qwctl/qwctl.h             |   1 +
 examples/server_node_efd/node/node.c               |   2 +-
 examples/server_node_efd/server/main.c             |   2 +-
 lib/librte_hash/rte_cuckoo_hash.c                  |   5 +-
 lib/librte_mempool/rte_mempool_ring.c              |  12 +-
 lib/librte_pdump/rte_pdump.c                       |   2 +-
 lib/librte_port/rte_port_frag.c                    |   3 +-
 lib/librte_port/rte_port_ras.c                     |   2 +-
 lib/librte_port/rte_port_ring.c                    |  34 +-
 lib/librte_ring/Makefile                           |   2 +
 lib/librte_ring/rte_event_ring.c                   | 220 ++++++
 lib/librte_ring/rte_event_ring.h                   | 507 ++++++++++++++
 lib/librte_ring/rte_ring.c                         |  82 +--
 lib/librte_ring/rte_ring.h                         | 762 ++++++++-------------
 47 files changed, 1340 insertions(+), 1373 deletions(-)
 create mode 100644 app/test/test_event_ring.c
 create mode 100644 lib/librte_ring/rte_event_ring.c
 create mode 100644 lib/librte_ring/rte_event_ring.h

-- 
2.9.3

^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 01/19] app/pdump: fix duplicate macro definition
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (2 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 00/19] ring cleanup and generalization Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 02/19] ring: remove split cacheline build setting Bruce Richardson
                   ` (17 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

RTE_RING_SZ_MASK is redefined here with the original definition in
rte_ring.h. Since rte_ring.h is already included, just remove the
duplicate definition here.

Fixes: caa7028276b8 ("app/pdump: add tool for packet capturing")

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/pdump/main.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/app/pdump/main.c b/app/pdump/main.c
index f3ef181..b88090d 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -92,7 +92,6 @@
 #define BURST_SIZE 32
 #define NUM_VDEVS 2
 
-#define RTE_RING_SZ_MASK  (unsigned)(0x0fffffff) /**< Ring size mask */
 /* true if x is a power of 2 */
 #define POWEROF2(x) ((((x)-1) & (x)) == 0)
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 02/19] ring: remove split cacheline build setting
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (3 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 01/19] app/pdump: fix duplicate macro definition Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 03/19] ring: create common structure for prod and cons metadata Bruce Richardson
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

There has been for some time in the rte_rings a build-time config value
to optionally split the producer and consumer information on to separate
cachelines. This should not really need to be a tunable, so just remove
the option and set the information to be always split. For improved
performance use 128B rather than 64B alignment since it stops the producer
and consumer data being on adjacent cachelines.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 config/common_base         | 1 -
 lib/librte_ring/rte_ring.c | 2 --
 lib/librte_ring/rte_ring.h | 8 ++------
 3 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/config/common_base b/config/common_base
index 71a4fcb..7691647 100644
--- a/config/common_base
+++ b/config/common_base
@@ -448,7 +448,6 @@ CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 #
 CONFIG_RTE_LIBRTE_RING=y
 CONFIG_RTE_LIBRTE_RING_DEBUG=n
-CONFIG_RTE_RING_SPLIT_PROD_CONS=n
 CONFIG_RTE_RING_PAUSE_REP_COUNT=0
 
 #
diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index ca0a108..4bc6da1 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -127,10 +127,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
 	/* compilation-time checks */
 	RTE_BUILD_BUG_ON((sizeof(struct rte_ring) &
 			  RTE_CACHE_LINE_MASK) != 0);
-#ifdef RTE_RING_SPLIT_PROD_CONS
 	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, cons) &
 			  RTE_CACHE_LINE_MASK) != 0);
-#endif
 	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, prod) &
 			  RTE_CACHE_LINE_MASK) != 0);
 #ifdef RTE_LIBRTE_RING_DEBUG
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index e359aff..1bc2571 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -168,7 +168,7 @@ struct rte_ring {
 		uint32_t mask;           /**< Mask (size-1) of ring. */
 		volatile uint32_t head;  /**< Producer head. */
 		volatile uint32_t tail;  /**< Producer tail. */
-	} prod __rte_cache_aligned;
+	} prod __rte_aligned(RTE_CACHE_LINE_SIZE * 2);
 
 	/** Ring consumer status. */
 	struct cons {
@@ -177,11 +177,7 @@ struct rte_ring {
 		uint32_t mask;           /**< Mask (size-1) of ring. */
 		volatile uint32_t head;  /**< Consumer head. */
 		volatile uint32_t tail;  /**< Consumer tail. */
-#ifdef RTE_RING_SPLIT_PROD_CONS
-	} cons __rte_cache_aligned;
-#else
-	} cons;
-#endif
+	} cons __rte_aligned(RTE_CACHE_LINE_SIZE * 2);
 
 #ifdef RTE_LIBRTE_RING_DEBUG
 	struct rte_ring_debug_stats stats[RTE_MAX_LCORE];
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 03/19] ring: create common structure for prod and cons metadata
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (4 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 02/19] ring: remove split cacheline build setting Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 04/19] ring: add a function to return the ring size Bruce Richardson
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

create a common structure to hold the metadata for the producer and
the consumer, since both need essentially the same information - the
head and tail values, the ring size and mask.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_ring/rte_ring.h | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 1bc2571..0a5c9ff 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -139,6 +139,19 @@ struct rte_ring_debug_stats {
 
 struct rte_memzone; /* forward declaration, so as not to require memzone.h */
 
+/* structure to hold a pair of head/tail values and other metadata */
+struct rte_ring_ht_ptr {
+	volatile uint32_t head;  /**< Prod/consumer head. */
+	volatile uint32_t tail;  /**< Prod/consumer tail. */
+	uint32_t size;           /**< Size of ring. */
+	uint32_t mask;           /**< Mask (size-1) of ring. */
+	union {
+		uint32_t sp_enqueue; /**< True, if single producer. */
+		uint32_t sc_dequeue; /**< True, if single consumer. */
+	};
+	uint32_t watermark;      /**< Max items before EDQUOT in producer. */
+};
+
 /**
  * An RTE ring structure.
  *
@@ -161,23 +174,10 @@ struct rte_ring {
 			/**< Memzone, if any, containing the rte_ring */
 
 	/** Ring producer status. */
-	struct prod {
-		uint32_t watermark;      /**< Maximum items before EDQUOT. */
-		uint32_t sp_enqueue;     /**< True, if single producer. */
-		uint32_t size;           /**< Size of ring. */
-		uint32_t mask;           /**< Mask (size-1) of ring. */
-		volatile uint32_t head;  /**< Producer head. */
-		volatile uint32_t tail;  /**< Producer tail. */
-	} prod __rte_aligned(RTE_CACHE_LINE_SIZE * 2);
+	struct rte_ring_ht_ptr prod __rte_aligned(RTE_CACHE_LINE_SIZE * 2);
 
 	/** Ring consumer status. */
-	struct cons {
-		uint32_t sc_dequeue;     /**< True, if single consumer. */
-		uint32_t size;           /**< Size of the ring. */
-		uint32_t mask;           /**< Mask (size-1) of ring. */
-		volatile uint32_t head;  /**< Consumer head. */
-		volatile uint32_t tail;  /**< Consumer tail. */
-	} cons __rte_aligned(RTE_CACHE_LINE_SIZE * 2);
+	struct rte_ring_ht_ptr cons __rte_aligned(RTE_CACHE_LINE_SIZE * 2);
 
 #ifdef RTE_LIBRTE_RING_DEBUG
 	struct rte_ring_debug_stats stats[RTE_MAX_LCORE];
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 04/19] ring: add a function to return the ring size
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (5 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 03/19] ring: create common structure for prod and cons metadata Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 05/19] crypto/null: use ring size function Bruce Richardson
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

Applications and other libraries should not be reading inside the
rte_ring structure directly to get the ring size. Instead add a fn
to allow it to be queried.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_ring/rte_ring.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 0a5c9ff..75bbcc1 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -1104,6 +1104,20 @@ rte_ring_free_count(const struct rte_ring *r)
 }
 
 /**
+ * Return the size of the ring.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @return
+ *   The number of elements which can be stored in the ring.
+ */
+static inline unsigned int
+rte_ring_get_size(struct rte_ring *r)
+{
+	return r->prod.size;
+}
+
+/**
  * Dump the status of all rings on the console
  *
  * @param f
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 05/19] crypto/null: use ring size function
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (6 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 04/19] ring: add a function to return the ring size Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 06/19] ring: eliminate duplication of size and mask fields Bruce Richardson
                   ` (13 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

Rather than reading the size directly from the ring structure,
use the dedicated function for that purpose.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/crypto/null/null_crypto_pmd_ops.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index 26ff631..4a24537 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -193,7 +193,7 @@ null_crypto_pmd_qp_create_processed_pkts_ring(struct null_crypto_qp *qp,
 
 	r = rte_ring_lookup(qp->name);
 	if (r) {
-		if (r->prod.size >= ring_size) {
+		if (rte_ring_get_size(r) >= ring_size) {
 			NULL_CRYPTO_LOG_INFO(
 				"Reusing existing ring %s for processed packets",
 				qp->name);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 06/19] ring: eliminate duplication of size and mask fields
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (7 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 05/19] crypto/null: use ring size function Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 07/19] ring: remove debug setting Bruce Richardson
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

The size and mask fields are duplicated in both the producer and
consumer data structures. Move them out of that into the top level
structure so they are not duplicated.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_ring.c       |  6 +++---
 lib/librte_ring/rte_ring.c | 20 ++++++++++----------
 lib/librte_ring/rte_ring.h | 32 ++++++++++++++++----------------
 3 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index ebcb896..af74e7d 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -148,7 +148,7 @@ check_live_watermark_change(__attribute__((unused)) void *dummy)
 		}
 
 		/* read watermark, the only change allowed is from 16 to 32 */
-		watermark = r->prod.watermark;
+		watermark = r->watermark;
 		if (watermark != watermark_old &&
 		    (watermark_old != 16 || watermark != 32)) {
 			printf("Bad watermark change %u -> %u\n", watermark_old,
@@ -213,7 +213,7 @@ test_set_watermark( void ){
 		printf( " ring lookup failed\n" );
 		goto error;
 	}
-	count = r->prod.size*2;
+	count = r->size*2;
 	setwm = rte_ring_set_water_mark(r, count);
 	if (setwm != -EINVAL){
 		printf("Test failed to detect invalid watermark count value\n");
@@ -222,7 +222,7 @@ test_set_watermark( void ){
 
 	count = 0;
 	rte_ring_set_water_mark(r, count);
-	if (r->prod.watermark != r->prod.size) {
+	if (r->watermark != r->size) {
 		printf("Test failed to detect invalid watermark count value\n");
 		goto error;
 	}
diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index 4bc6da1..183594f 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -144,11 +144,11 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
 	if (ret < 0 || ret >= (int)sizeof(r->name))
 		return -ENAMETOOLONG;
 	r->flags = flags;
-	r->prod.watermark = count;
+	r->watermark = count;
 	r->prod.sp_enqueue = !!(flags & RING_F_SP_ENQ);
 	r->cons.sc_dequeue = !!(flags & RING_F_SC_DEQ);
-	r->prod.size = r->cons.size = count;
-	r->prod.mask = r->cons.mask = count-1;
+	r->size = count;
+	r->mask = count-1;
 	r->prod.head = r->cons.head = 0;
 	r->prod.tail = r->cons.tail = 0;
 
@@ -269,14 +269,14 @@ rte_ring_free(struct rte_ring *r)
 int
 rte_ring_set_water_mark(struct rte_ring *r, unsigned count)
 {
-	if (count >= r->prod.size)
+	if (count >= r->size)
 		return -EINVAL;
 
 	/* if count is 0, disable the watermarking */
 	if (count == 0)
-		count = r->prod.size;
+		count = r->size;
 
-	r->prod.watermark = count;
+	r->watermark = count;
 	return 0;
 }
 
@@ -291,17 +291,17 @@ rte_ring_dump(FILE *f, const struct rte_ring *r)
 
 	fprintf(f, "ring <%s>@%p\n", r->name, r);
 	fprintf(f, "  flags=%x\n", r->flags);
-	fprintf(f, "  size=%"PRIu32"\n", r->prod.size);
+	fprintf(f, "  size=%"PRIu32"\n", r->size);
 	fprintf(f, "  ct=%"PRIu32"\n", r->cons.tail);
 	fprintf(f, "  ch=%"PRIu32"\n", r->cons.head);
 	fprintf(f, "  pt=%"PRIu32"\n", r->prod.tail);
 	fprintf(f, "  ph=%"PRIu32"\n", r->prod.head);
 	fprintf(f, "  used=%u\n", rte_ring_count(r));
 	fprintf(f, "  avail=%u\n", rte_ring_free_count(r));
-	if (r->prod.watermark == r->prod.size)
+	if (r->watermark == r->size)
 		fprintf(f, "  watermark=0\n");
 	else
-		fprintf(f, "  watermark=%"PRIu32"\n", r->prod.watermark);
+		fprintf(f, "  watermark=%"PRIu32"\n", r->watermark);
 
 	/* sum and dump statistics */
 #ifdef RTE_LIBRTE_RING_DEBUG
@@ -318,7 +318,7 @@ rte_ring_dump(FILE *f, const struct rte_ring *r)
 		sum.deq_fail_bulk += r->stats[lcore_id].deq_fail_bulk;
 		sum.deq_fail_objs += r->stats[lcore_id].deq_fail_objs;
 	}
-	fprintf(f, "  size=%"PRIu32"\n", r->prod.size);
+	fprintf(f, "  size=%"PRIu32"\n", r->size);
 	fprintf(f, "  enq_success_bulk=%"PRIu64"\n", sum.enq_success_bulk);
 	fprintf(f, "  enq_success_objs=%"PRIu64"\n", sum.enq_success_objs);
 	fprintf(f, "  enq_quota_bulk=%"PRIu64"\n", sum.enq_quota_bulk);
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 75bbcc1..1e4b8ad 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -143,13 +143,10 @@ struct rte_memzone; /* forward declaration, so as not to require memzone.h */
 struct rte_ring_ht_ptr {
 	volatile uint32_t head;  /**< Prod/consumer head. */
 	volatile uint32_t tail;  /**< Prod/consumer tail. */
-	uint32_t size;           /**< Size of ring. */
-	uint32_t mask;           /**< Mask (size-1) of ring. */
 	union {
 		uint32_t sp_enqueue; /**< True, if single producer. */
 		uint32_t sc_dequeue; /**< True, if single consumer. */
 	};
-	uint32_t watermark;      /**< Max items before EDQUOT in producer. */
 };
 
 /**
@@ -169,9 +166,12 @@ struct rte_ring {
 	 * next time the ABI changes
 	 */
 	char name[RTE_MEMZONE_NAMESIZE];    /**< Name of the ring. */
-	int flags;                       /**< Flags supplied at creation. */
+	int flags;               /**< Flags supplied at creation. */
 	const struct rte_memzone *memzone;
 			/**< Memzone, if any, containing the rte_ring */
+	uint32_t size;           /**< Size of ring. */
+	uint32_t mask;           /**< Mask (size-1) of ring. */
+	uint32_t watermark;      /**< Max items before EDQUOT in producer. */
 
 	/** Ring producer status. */
 	struct rte_ring_ht_ptr prod __rte_aligned(RTE_CACHE_LINE_SIZE * 2);
@@ -350,7 +350,7 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
  * Placed here since identical code needed in both
  * single and multi producer enqueue functions */
 #define ENQUEUE_PTRS() do { \
-	const uint32_t size = r->prod.size; \
+	const uint32_t size = r->size; \
 	uint32_t idx = prod_head & mask; \
 	if (likely(idx + n < size)) { \
 		for (i = 0; i < (n & ((~(unsigned)0x3))); i+=4, idx+=4) { \
@@ -377,7 +377,7 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
  * single and multi consumer dequeue functions */
 #define DEQUEUE_PTRS() do { \
 	uint32_t idx = cons_head & mask; \
-	const uint32_t size = r->cons.size; \
+	const uint32_t size = r->size; \
 	if (likely(idx + n < size)) { \
 		for (i = 0; i < (n & (~(unsigned)0x3)); i+=4, idx+=4) {\
 			obj_table[i] = r->ring[idx]; \
@@ -432,7 +432,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	const unsigned max = n;
 	int success;
 	unsigned i, rep = 0;
-	uint32_t mask = r->prod.mask;
+	uint32_t mask = r->mask;
 	int ret;
 
 	/* Avoid the unnecessary cmpset operation below, which is also
@@ -480,7 +480,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	rte_smp_wmb();
 
 	/* if we exceed the watermark */
-	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
+	if (unlikely(((mask + 1) - free_entries + n) > r->watermark)) {
 		ret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :
 				(int)(n | RTE_RING_QUOT_EXCEED);
 		__RING_STAT_ADD(r, enq_quota, n);
@@ -539,7 +539,7 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	uint32_t prod_head, cons_tail;
 	uint32_t prod_next, free_entries;
 	unsigned i;
-	uint32_t mask = r->prod.mask;
+	uint32_t mask = r->mask;
 	int ret;
 
 	prod_head = r->prod.head;
@@ -575,7 +575,7 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	rte_smp_wmb();
 
 	/* if we exceed the watermark */
-	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
+	if (unlikely(((mask + 1) - free_entries + n) > r->watermark)) {
 		ret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :
 			(int)(n | RTE_RING_QUOT_EXCEED);
 		__RING_STAT_ADD(r, enq_quota, n);
@@ -625,7 +625,7 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 	const unsigned max = n;
 	int success;
 	unsigned i, rep = 0;
-	uint32_t mask = r->prod.mask;
+	uint32_t mask = r->mask;
 
 	/* Avoid the unnecessary cmpset operation below, which is also
 	 * potentially harmful when n equals 0. */
@@ -722,7 +722,7 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
 	uint32_t cons_head, prod_tail;
 	uint32_t cons_next, entries;
 	unsigned i;
-	uint32_t mask = r->prod.mask;
+	uint32_t mask = r->mask;
 
 	cons_head = r->cons.head;
 	prod_tail = r->prod.tail;
@@ -1051,7 +1051,7 @@ rte_ring_full(const struct rte_ring *r)
 {
 	uint32_t prod_tail = r->prod.tail;
 	uint32_t cons_tail = r->cons.tail;
-	return ((cons_tail - prod_tail - 1) & r->prod.mask) == 0;
+	return ((cons_tail - prod_tail - 1) & r->mask) == 0;
 }
 
 /**
@@ -1084,7 +1084,7 @@ rte_ring_count(const struct rte_ring *r)
 {
 	uint32_t prod_tail = r->prod.tail;
 	uint32_t cons_tail = r->cons.tail;
-	return (prod_tail - cons_tail) & r->prod.mask;
+	return (prod_tail - cons_tail) & r->mask;
 }
 
 /**
@@ -1100,7 +1100,7 @@ rte_ring_free_count(const struct rte_ring *r)
 {
 	uint32_t prod_tail = r->prod.tail;
 	uint32_t cons_tail = r->cons.tail;
-	return (cons_tail - prod_tail - 1) & r->prod.mask;
+	return (cons_tail - prod_tail - 1) & r->mask;
 }
 
 /**
@@ -1114,7 +1114,7 @@ rte_ring_free_count(const struct rte_ring *r)
 static inline unsigned int
 rte_ring_get_size(struct rte_ring *r)
 {
-	return r->prod.size;
+	return r->size;
 }
 
 /**
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 07/19] ring: remove debug setting
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (8 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 06/19] ring: eliminate duplication of size and mask fields Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 08/19] ring: remove the yield when waiting for tail update Bruce Richardson
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

The debug option only provided some statistics to the user most of
which could be tracked by the application itself. Remove this as a
compile time option, and feature, simplifying the code somewhat.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_ring.c               | 410 -------------------------------------
 config/common_base                 |   1 -
 doc/guides/prog_guide/ring_lib.rst |   7 -
 lib/librte_ring/rte_ring.c         |  41 ----
 lib/librte_ring/rte_ring.h         |  97 ++-------
 5 files changed, 12 insertions(+), 544 deletions(-)

diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index af74e7d..0cf55b5 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -763,412 +763,6 @@ test_ring_burst_basic(void)
 	return -1;
 }
 
-static int
-test_ring_stats(void)
-{
-
-#ifndef RTE_LIBRTE_RING_DEBUG
-	printf("Enable RTE_LIBRTE_RING_DEBUG to test ring stats.\n");
-	return 0;
-#else
-	void **src = NULL, **cur_src = NULL, **dst = NULL, **cur_dst = NULL;
-	int ret;
-	unsigned i;
-	unsigned num_items            = 0;
-	unsigned failed_enqueue_ops   = 0;
-	unsigned failed_enqueue_items = 0;
-	unsigned failed_dequeue_ops   = 0;
-	unsigned failed_dequeue_items = 0;
-	unsigned last_enqueue_ops     = 0;
-	unsigned last_enqueue_items   = 0;
-	unsigned last_quota_ops       = 0;
-	unsigned last_quota_items     = 0;
-	unsigned lcore_id = rte_lcore_id();
-	struct rte_ring_debug_stats *ring_stats = &r->stats[lcore_id];
-
-	printf("Test the ring stats.\n");
-
-	/* Reset the watermark in case it was set in another test. */
-	rte_ring_set_water_mark(r, 0);
-
-	/* Reset the ring stats. */
-	memset(&r->stats[lcore_id], 0, sizeof(r->stats[lcore_id]));
-
-	/* Allocate some dummy object pointers. */
-	src = malloc(RING_SIZE*2*sizeof(void *));
-	if (src == NULL)
-		goto fail;
-
-	for (i = 0; i < RING_SIZE*2 ; i++) {
-		src[i] = (void *)(unsigned long)i;
-	}
-
-	/* Allocate some memory for copied objects. */
-	dst = malloc(RING_SIZE*2*sizeof(void *));
-	if (dst == NULL)
-		goto fail;
-
-	memset(dst, 0, RING_SIZE*2*sizeof(void *));
-
-	/* Set the head and tail pointers. */
-	cur_src = src;
-	cur_dst = dst;
-
-	/* Do Enqueue tests. */
-	printf("Test the dequeue stats.\n");
-
-	/* Fill the ring up to RING_SIZE -1. */
-	printf("Fill the ring.\n");
-	for (i = 0; i< (RING_SIZE/MAX_BULK); i++) {
-		rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK);
-		cur_src += MAX_BULK;
-	}
-
-	/* Adjust for final enqueue = MAX_BULK -1. */
-	cur_src--;
-
-	printf("Verify that the ring is full.\n");
-	if (rte_ring_full(r) != 1)
-		goto fail;
-
-
-	printf("Verify the enqueue success stats.\n");
-	/* Stats should match above enqueue operations to fill the ring. */
-	if (ring_stats->enq_success_bulk != (RING_SIZE/MAX_BULK))
-		goto fail;
-
-	/* Current max objects is RING_SIZE -1. */
-	if (ring_stats->enq_success_objs != RING_SIZE -1)
-		goto fail;
-
-	/* Shouldn't have any failures yet. */
-	if (ring_stats->enq_fail_bulk != 0)
-		goto fail;
-	if (ring_stats->enq_fail_objs != 0)
-		goto fail;
-
-
-	printf("Test stats for SP burst enqueue to a full ring.\n");
-	num_items = 2;
-	ret = rte_ring_sp_enqueue_burst(r, cur_src, num_items);
-	if ((ret & RTE_RING_SZ_MASK) != 0)
-		goto fail;
-
-	failed_enqueue_ops   += 1;
-	failed_enqueue_items += num_items;
-
-	/* The enqueue should have failed. */
-	if (ring_stats->enq_fail_bulk != failed_enqueue_ops)
-		goto fail;
-	if (ring_stats->enq_fail_objs != failed_enqueue_items)
-		goto fail;
-
-
-	printf("Test stats for SP bulk enqueue to a full ring.\n");
-	num_items = 4;
-	ret = rte_ring_sp_enqueue_bulk(r, cur_src, num_items);
-	if (ret != -ENOBUFS)
-		goto fail;
-
-	failed_enqueue_ops   += 1;
-	failed_enqueue_items += num_items;
-
-	/* The enqueue should have failed. */
-	if (ring_stats->enq_fail_bulk != failed_enqueue_ops)
-		goto fail;
-	if (ring_stats->enq_fail_objs != failed_enqueue_items)
-		goto fail;
-
-
-	printf("Test stats for MP burst enqueue to a full ring.\n");
-	num_items = 8;
-	ret = rte_ring_mp_enqueue_burst(r, cur_src, num_items);
-	if ((ret & RTE_RING_SZ_MASK) != 0)
-		goto fail;
-
-	failed_enqueue_ops   += 1;
-	failed_enqueue_items += num_items;
-
-	/* The enqueue should have failed. */
-	if (ring_stats->enq_fail_bulk != failed_enqueue_ops)
-		goto fail;
-	if (ring_stats->enq_fail_objs != failed_enqueue_items)
-		goto fail;
-
-
-	printf("Test stats for MP bulk enqueue to a full ring.\n");
-	num_items = 16;
-	ret = rte_ring_mp_enqueue_bulk(r, cur_src, num_items);
-	if (ret != -ENOBUFS)
-		goto fail;
-
-	failed_enqueue_ops   += 1;
-	failed_enqueue_items += num_items;
-
-	/* The enqueue should have failed. */
-	if (ring_stats->enq_fail_bulk != failed_enqueue_ops)
-		goto fail;
-	if (ring_stats->enq_fail_objs != failed_enqueue_items)
-		goto fail;
-
-
-	/* Do Dequeue tests. */
-	printf("Test the dequeue stats.\n");
-
-	printf("Empty the ring.\n");
-	for (i = 0; i<RING_SIZE/MAX_BULK; i++) {
-		rte_ring_sc_dequeue_burst(r, cur_dst, MAX_BULK);
-		cur_dst += MAX_BULK;
-	}
-
-	/* There was only RING_SIZE -1 objects to dequeue. */
-	cur_dst++;
-
-	printf("Verify ring is empty.\n");
-	if (1 != rte_ring_empty(r))
-		goto fail;
-
-	printf("Verify the dequeue success stats.\n");
-	/* Stats should match above dequeue operations. */
-	if (ring_stats->deq_success_bulk != (RING_SIZE/MAX_BULK))
-		goto fail;
-
-	/* Objects dequeued is RING_SIZE -1. */
-	if (ring_stats->deq_success_objs != RING_SIZE -1)
-		goto fail;
-
-	/* Shouldn't have any dequeue failure stats yet. */
-	if (ring_stats->deq_fail_bulk != 0)
-		goto fail;
-
-	printf("Test stats for SC burst dequeue with an empty ring.\n");
-	num_items = 2;
-	ret = rte_ring_sc_dequeue_burst(r, cur_dst, num_items);
-	if ((ret & RTE_RING_SZ_MASK) != 0)
-		goto fail;
-
-	failed_dequeue_ops   += 1;
-	failed_dequeue_items += num_items;
-
-	/* The dequeue should have failed. */
-	if (ring_stats->deq_fail_bulk != failed_dequeue_ops)
-		goto fail;
-	if (ring_stats->deq_fail_objs != failed_dequeue_items)
-		goto fail;
-
-
-	printf("Test stats for SC bulk dequeue with an empty ring.\n");
-	num_items = 4;
-	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, num_items);
-	if (ret != -ENOENT)
-		goto fail;
-
-	failed_dequeue_ops   += 1;
-	failed_dequeue_items += num_items;
-
-	/* The dequeue should have failed. */
-	if (ring_stats->deq_fail_bulk != failed_dequeue_ops)
-		goto fail;
-	if (ring_stats->deq_fail_objs != failed_dequeue_items)
-		goto fail;
-
-
-	printf("Test stats for MC burst dequeue with an empty ring.\n");
-	num_items = 8;
-	ret = rte_ring_mc_dequeue_burst(r, cur_dst, num_items);
-	if ((ret & RTE_RING_SZ_MASK) != 0)
-		goto fail;
-	failed_dequeue_ops   += 1;
-	failed_dequeue_items += num_items;
-
-	/* The dequeue should have failed. */
-	if (ring_stats->deq_fail_bulk != failed_dequeue_ops)
-		goto fail;
-	if (ring_stats->deq_fail_objs != failed_dequeue_items)
-		goto fail;
-
-
-	printf("Test stats for MC bulk dequeue with an empty ring.\n");
-	num_items = 16;
-	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, num_items);
-	if (ret != -ENOENT)
-		goto fail;
-
-	failed_dequeue_ops   += 1;
-	failed_dequeue_items += num_items;
-
-	/* The dequeue should have failed. */
-	if (ring_stats->deq_fail_bulk != failed_dequeue_ops)
-		goto fail;
-	if (ring_stats->deq_fail_objs != failed_dequeue_items)
-		goto fail;
-
-
-	printf("Test total enqueue/dequeue stats.\n");
-	/* At this point the enqueue and dequeue stats should be the same. */
-	if (ring_stats->enq_success_bulk != ring_stats->deq_success_bulk)
-		goto fail;
-	if (ring_stats->enq_success_objs != ring_stats->deq_success_objs)
-		goto fail;
-	if (ring_stats->enq_fail_bulk    != ring_stats->deq_fail_bulk)
-		goto fail;
-	if (ring_stats->enq_fail_objs    != ring_stats->deq_fail_objs)
-		goto fail;
-
-
-	/* Watermark Tests. */
-	printf("Test the watermark/quota stats.\n");
-
-	printf("Verify the initial watermark stats.\n");
-	/* Watermark stats should be 0 since there is no watermark. */
-	if (ring_stats->enq_quota_bulk != 0)
-		goto fail;
-	if (ring_stats->enq_quota_objs != 0)
-		goto fail;
-
-	/* Set a watermark. */
-	rte_ring_set_water_mark(r, 16);
-
-	/* Reset pointers. */
-	cur_src = src;
-	cur_dst = dst;
-
-	last_enqueue_ops   = ring_stats->enq_success_bulk;
-	last_enqueue_items = ring_stats->enq_success_objs;
-
-
-	printf("Test stats for SP burst enqueue below watermark.\n");
-	num_items = 8;
-	ret = rte_ring_sp_enqueue_burst(r, cur_src, num_items);
-	if ((ret & RTE_RING_SZ_MASK) != num_items)
-		goto fail;
-
-	/* Watermark stats should still be 0. */
-	if (ring_stats->enq_quota_bulk != 0)
-		goto fail;
-	if (ring_stats->enq_quota_objs != 0)
-		goto fail;
-
-	/* Success stats should have increased. */
-	if (ring_stats->enq_success_bulk != last_enqueue_ops + 1)
-		goto fail;
-	if (ring_stats->enq_success_objs != last_enqueue_items + num_items)
-		goto fail;
-
-	last_enqueue_ops   = ring_stats->enq_success_bulk;
-	last_enqueue_items = ring_stats->enq_success_objs;
-
-
-	printf("Test stats for SP burst enqueue at watermark.\n");
-	num_items = 8;
-	ret = rte_ring_sp_enqueue_burst(r, cur_src, num_items);
-	if ((ret & RTE_RING_SZ_MASK) != num_items)
-		goto fail;
-
-	/* Watermark stats should have changed. */
-	if (ring_stats->enq_quota_bulk != 1)
-		goto fail;
-	if (ring_stats->enq_quota_objs != num_items)
-		goto fail;
-
-	last_quota_ops   = ring_stats->enq_quota_bulk;
-	last_quota_items = ring_stats->enq_quota_objs;
-
-
-	printf("Test stats for SP burst enqueue above watermark.\n");
-	num_items = 1;
-	ret = rte_ring_sp_enqueue_burst(r, cur_src, num_items);
-	if ((ret & RTE_RING_SZ_MASK) != num_items)
-		goto fail;
-
-	/* Watermark stats should have changed. */
-	if (ring_stats->enq_quota_bulk != last_quota_ops +1)
-		goto fail;
-	if (ring_stats->enq_quota_objs != last_quota_items + num_items)
-		goto fail;
-
-	last_quota_ops   = ring_stats->enq_quota_bulk;
-	last_quota_items = ring_stats->enq_quota_objs;
-
-
-	printf("Test stats for MP burst enqueue above watermark.\n");
-	num_items = 2;
-	ret = rte_ring_mp_enqueue_burst(r, cur_src, num_items);
-	if ((ret & RTE_RING_SZ_MASK) != num_items)
-		goto fail;
-
-	/* Watermark stats should have changed. */
-	if (ring_stats->enq_quota_bulk != last_quota_ops +1)
-		goto fail;
-	if (ring_stats->enq_quota_objs != last_quota_items + num_items)
-		goto fail;
-
-	last_quota_ops   = ring_stats->enq_quota_bulk;
-	last_quota_items = ring_stats->enq_quota_objs;
-
-
-	printf("Test stats for SP bulk enqueue above watermark.\n");
-	num_items = 4;
-	ret = rte_ring_sp_enqueue_bulk(r, cur_src, num_items);
-	if (ret != -EDQUOT)
-		goto fail;
-
-	/* Watermark stats should have changed. */
-	if (ring_stats->enq_quota_bulk != last_quota_ops +1)
-		goto fail;
-	if (ring_stats->enq_quota_objs != last_quota_items + num_items)
-		goto fail;
-
-	last_quota_ops   = ring_stats->enq_quota_bulk;
-	last_quota_items = ring_stats->enq_quota_objs;
-
-
-	printf("Test stats for MP bulk enqueue above watermark.\n");
-	num_items = 8;
-	ret = rte_ring_mp_enqueue_bulk(r, cur_src, num_items);
-	if (ret != -EDQUOT)
-		goto fail;
-
-	/* Watermark stats should have changed. */
-	if (ring_stats->enq_quota_bulk != last_quota_ops +1)
-		goto fail;
-	if (ring_stats->enq_quota_objs != last_quota_items + num_items)
-		goto fail;
-
-	printf("Test watermark success stats.\n");
-	/* Success stats should be same as last non-watermarked enqueue. */
-	if (ring_stats->enq_success_bulk != last_enqueue_ops)
-		goto fail;
-	if (ring_stats->enq_success_objs != last_enqueue_items)
-		goto fail;
-
-
-	/* Cleanup. */
-
-	/* Empty the ring. */
-	for (i = 0; i<RING_SIZE/MAX_BULK; i++) {
-		rte_ring_sc_dequeue_burst(r, cur_dst, MAX_BULK);
-		cur_dst += MAX_BULK;
-	}
-
-	/* Reset the watermark. */
-	rte_ring_set_water_mark(r, 0);
-
-	/* Reset the ring stats. */
-	memset(&r->stats[lcore_id], 0, sizeof(r->stats[lcore_id]));
-
-	/* Free memory before test completed */
-	free(src);
-	free(dst);
-	return 0;
-
-fail:
-	free(src);
-	free(dst);
-	return -1;
-#endif
-}
-
 /*
  * it will always fail to create ring with a wrong ring size number in this function
  */
@@ -1335,10 +929,6 @@ test_ring(void)
 	if (test_ring_basic() < 0)
 		return -1;
 
-	/* ring stats */
-	if (test_ring_stats() < 0)
-		return -1;
-
 	/* basic operations */
 	if (test_live_watermark_change() < 0)
 		return -1;
diff --git a/config/common_base b/config/common_base
index 7691647..3bbe3aa 100644
--- a/config/common_base
+++ b/config/common_base
@@ -447,7 +447,6 @@ CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
-CONFIG_RTE_LIBRTE_RING_DEBUG=n
 CONFIG_RTE_RING_PAUSE_REP_COUNT=0
 
 #
diff --git a/doc/guides/prog_guide/ring_lib.rst b/doc/guides/prog_guide/ring_lib.rst
index 9f69753..d4ab502 100644
--- a/doc/guides/prog_guide/ring_lib.rst
+++ b/doc/guides/prog_guide/ring_lib.rst
@@ -110,13 +110,6 @@ Once an enqueue operation reaches the high water mark, the producer is notified,
 
 This mechanism can be used, for example, to exert a back pressure on I/O to inform the LAN to PAUSE.
 
-Debug
-~~~~~
-
-When debug is enabled (CONFIG_RTE_LIBRTE_RING_DEBUG is set),
-the library stores some per-ring statistic counters about the number of enqueues/dequeues.
-These statistics are per-core to avoid concurrent accesses or atomic operations.
-
 Use Cases
 ---------
 
diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index 183594f..2a04f05 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -131,12 +131,6 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
 			  RTE_CACHE_LINE_MASK) != 0);
 	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, prod) &
 			  RTE_CACHE_LINE_MASK) != 0);
-#ifdef RTE_LIBRTE_RING_DEBUG
-	RTE_BUILD_BUG_ON((sizeof(struct rte_ring_debug_stats) &
-			  RTE_CACHE_LINE_MASK) != 0);
-	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, stats) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#endif
 
 	/* init the ring structure */
 	memset(r, 0, sizeof(*r));
@@ -284,11 +278,6 @@ rte_ring_set_water_mark(struct rte_ring *r, unsigned count)
 void
 rte_ring_dump(FILE *f, const struct rte_ring *r)
 {
-#ifdef RTE_LIBRTE_RING_DEBUG
-	struct rte_ring_debug_stats sum;
-	unsigned lcore_id;
-#endif
-
 	fprintf(f, "ring <%s>@%p\n", r->name, r);
 	fprintf(f, "  flags=%x\n", r->flags);
 	fprintf(f, "  size=%"PRIu32"\n", r->size);
@@ -302,36 +291,6 @@ rte_ring_dump(FILE *f, const struct rte_ring *r)
 		fprintf(f, "  watermark=0\n");
 	else
 		fprintf(f, "  watermark=%"PRIu32"\n", r->watermark);
-
-	/* sum and dump statistics */
-#ifdef RTE_LIBRTE_RING_DEBUG
-	memset(&sum, 0, sizeof(sum));
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
-		sum.enq_success_bulk += r->stats[lcore_id].enq_success_bulk;
-		sum.enq_success_objs += r->stats[lcore_id].enq_success_objs;
-		sum.enq_quota_bulk += r->stats[lcore_id].enq_quota_bulk;
-		sum.enq_quota_objs += r->stats[lcore_id].enq_quota_objs;
-		sum.enq_fail_bulk += r->stats[lcore_id].enq_fail_bulk;
-		sum.enq_fail_objs += r->stats[lcore_id].enq_fail_objs;
-		sum.deq_success_bulk += r->stats[lcore_id].deq_success_bulk;
-		sum.deq_success_objs += r->stats[lcore_id].deq_success_objs;
-		sum.deq_fail_bulk += r->stats[lcore_id].deq_fail_bulk;
-		sum.deq_fail_objs += r->stats[lcore_id].deq_fail_objs;
-	}
-	fprintf(f, "  size=%"PRIu32"\n", r->size);
-	fprintf(f, "  enq_success_bulk=%"PRIu64"\n", sum.enq_success_bulk);
-	fprintf(f, "  enq_success_objs=%"PRIu64"\n", sum.enq_success_objs);
-	fprintf(f, "  enq_quota_bulk=%"PRIu64"\n", sum.enq_quota_bulk);
-	fprintf(f, "  enq_quota_objs=%"PRIu64"\n", sum.enq_quota_objs);
-	fprintf(f, "  enq_fail_bulk=%"PRIu64"\n", sum.enq_fail_bulk);
-	fprintf(f, "  enq_fail_objs=%"PRIu64"\n", sum.enq_fail_objs);
-	fprintf(f, "  deq_success_bulk=%"PRIu64"\n", sum.deq_success_bulk);
-	fprintf(f, "  deq_success_objs=%"PRIu64"\n", sum.deq_success_objs);
-	fprintf(f, "  deq_fail_bulk=%"PRIu64"\n", sum.deq_fail_bulk);
-	fprintf(f, "  deq_fail_objs=%"PRIu64"\n", sum.deq_fail_objs);
-#else
-	fprintf(f, "  no statistics available\n");
-#endif
 }
 
 /* dump the status of all rings on the console */
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 1e4b8ad..c059daa 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -109,24 +109,6 @@ enum rte_ring_queue_behavior {
 	RTE_RING_QUEUE_VARIABLE   /* Enq/Deq as many items as possible from ring */
 };
 
-#ifdef RTE_LIBRTE_RING_DEBUG
-/**
- * A structure that stores the ring statistics (per-lcore).
- */
-struct rte_ring_debug_stats {
-	uint64_t enq_success_bulk; /**< Successful enqueues number. */
-	uint64_t enq_success_objs; /**< Objects successfully enqueued. */
-	uint64_t enq_quota_bulk;   /**< Successful enqueues above watermark. */
-	uint64_t enq_quota_objs;   /**< Objects enqueued above watermark. */
-	uint64_t enq_fail_bulk;    /**< Failed enqueues number. */
-	uint64_t enq_fail_objs;    /**< Objects that failed to be enqueued. */
-	uint64_t deq_success_bulk; /**< Successful dequeues number. */
-	uint64_t deq_success_objs; /**< Objects successfully dequeued. */
-	uint64_t deq_fail_bulk;    /**< Failed dequeues number. */
-	uint64_t deq_fail_objs;    /**< Objects that failed to be dequeued. */
-} __rte_cache_aligned;
-#endif
-
 #define RTE_RING_MZ_PREFIX "RG_"
 /**< The maximum length of a ring name. */
 #define RTE_RING_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
@@ -179,10 +161,6 @@ struct rte_ring {
 	/** Ring consumer status. */
 	struct rte_ring_ht_ptr cons __rte_aligned(RTE_CACHE_LINE_SIZE * 2);
 
-#ifdef RTE_LIBRTE_RING_DEBUG
-	struct rte_ring_debug_stats stats[RTE_MAX_LCORE];
-#endif
-
 	void *ring[] __rte_cache_aligned;   /**< Memory space of ring starts here.
 	                                     * not volatile so need to be careful
 	                                     * about compiler re-ordering */
@@ -194,27 +172,6 @@ struct rte_ring {
 #define RTE_RING_SZ_MASK  (unsigned)(0x0fffffff) /**< Ring size mask */
 
 /**
- * @internal When debug is enabled, store ring statistics.
- * @param r
- *   A pointer to the ring.
- * @param name
- *   The name of the statistics field to increment in the ring.
- * @param n
- *   The number to add to the object-oriented statistics.
- */
-#ifdef RTE_LIBRTE_RING_DEBUG
-#define __RING_STAT_ADD(r, name, n) do {                        \
-		unsigned __lcore_id = rte_lcore_id();           \
-		if (__lcore_id < RTE_MAX_LCORE) {               \
-			r->stats[__lcore_id].name##_objs += n;  \
-			r->stats[__lcore_id].name##_bulk += 1;  \
-		}                                               \
-	} while(0)
-#else
-#define __RING_STAT_ADD(r, name, n) do {} while(0)
-#endif
-
-/**
  * Calculate the memory size needed for a ring
  *
  * This function returns the number of bytes needed for a ring, given
@@ -455,17 +412,12 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 
 		/* check that we have enough room in ring */
 		if (unlikely(n > free_entries)) {
-			if (behavior == RTE_RING_QUEUE_FIXED) {
-				__RING_STAT_ADD(r, enq_fail, n);
+			if (behavior == RTE_RING_QUEUE_FIXED)
 				return -ENOBUFS;
-			}
 			else {
 				/* No free entry available */
-				if (unlikely(free_entries == 0)) {
-					__RING_STAT_ADD(r, enq_fail, n);
+				if (unlikely(free_entries == 0))
 					return 0;
-				}
-
 				n = free_entries;
 			}
 		}
@@ -480,15 +432,11 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	rte_smp_wmb();
 
 	/* if we exceed the watermark */
-	if (unlikely(((mask + 1) - free_entries + n) > r->watermark)) {
+	if (unlikely(((mask + 1) - free_entries + n) > r->watermark))
 		ret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :
 				(int)(n | RTE_RING_QUOT_EXCEED);
-		__RING_STAT_ADD(r, enq_quota, n);
-	}
-	else {
+	else
 		ret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
-		__RING_STAT_ADD(r, enq_success, n);
-	}
 
 	/*
 	 * If there are other enqueues in progress that preceded us,
@@ -552,17 +500,12 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 
 	/* check that we have enough room in ring */
 	if (unlikely(n > free_entries)) {
-		if (behavior == RTE_RING_QUEUE_FIXED) {
-			__RING_STAT_ADD(r, enq_fail, n);
+		if (behavior == RTE_RING_QUEUE_FIXED)
 			return -ENOBUFS;
-		}
 		else {
 			/* No free entry available */
-			if (unlikely(free_entries == 0)) {
-				__RING_STAT_ADD(r, enq_fail, n);
+			if (unlikely(free_entries == 0))
 				return 0;
-			}
-
 			n = free_entries;
 		}
 	}
@@ -575,15 +518,11 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	rte_smp_wmb();
 
 	/* if we exceed the watermark */
-	if (unlikely(((mask + 1) - free_entries + n) > r->watermark)) {
+	if (unlikely(((mask + 1) - free_entries + n) > r->watermark))
 		ret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :
 			(int)(n | RTE_RING_QUOT_EXCEED);
-		__RING_STAT_ADD(r, enq_quota, n);
-	}
-	else {
+	else
 		ret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
-		__RING_STAT_ADD(r, enq_success, n);
-	}
 
 	r->prod.tail = prod_next;
 	return ret;
@@ -647,16 +586,11 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 
 		/* Set the actual entries for dequeue */
 		if (n > entries) {
-			if (behavior == RTE_RING_QUEUE_FIXED) {
-				__RING_STAT_ADD(r, deq_fail, n);
+			if (behavior == RTE_RING_QUEUE_FIXED)
 				return -ENOENT;
-			}
 			else {
-				if (unlikely(entries == 0)){
-					__RING_STAT_ADD(r, deq_fail, n);
+				if (unlikely(entries == 0))
 					return 0;
-				}
-
 				n = entries;
 			}
 		}
@@ -686,7 +620,6 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 			sched_yield();
 		}
 	}
-	__RING_STAT_ADD(r, deq_success, n);
 	r->cons.tail = cons_next;
 
 	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
@@ -733,16 +666,11 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
 	entries = prod_tail - cons_head;
 
 	if (n > entries) {
-		if (behavior == RTE_RING_QUEUE_FIXED) {
-			__RING_STAT_ADD(r, deq_fail, n);
+		if (behavior == RTE_RING_QUEUE_FIXED)
 			return -ENOENT;
-		}
 		else {
-			if (unlikely(entries == 0)){
-				__RING_STAT_ADD(r, deq_fail, n);
+			if (unlikely(entries == 0))
 				return 0;
-			}
-
 			n = entries;
 		}
 	}
@@ -754,7 +682,6 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
 	DEQUEUE_PTRS();
 	rte_smp_rmb();
 
-	__RING_STAT_ADD(r, deq_success, n);
 	r->cons.tail = cons_next;
 	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
 }
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 08/19] ring: remove the yield when waiting for tail update
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (9 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 07/19] ring: remove debug setting Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 09/19] ring: remove watermark support Bruce Richardson
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

There was a compile time setting to enable a ring to yield when
it entered a loop in mp or mc rings waiting for the tail pointer update.
Build time settings are not recommended for enabling/disabling features,
and since this was off by default, remove it completely. If needed, a
runtime enabled equivalent can be used.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 config/common_base                              |  1 -
 doc/guides/prog_guide/env_abstraction_layer.rst |  5 ----
 lib/librte_ring/rte_ring.h                      | 35 +++++--------------------
 3 files changed, 6 insertions(+), 35 deletions(-)

diff --git a/config/common_base b/config/common_base
index 3bbe3aa..125ee45 100644
--- a/config/common_base
+++ b/config/common_base
@@ -447,7 +447,6 @@ CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
-CONFIG_RTE_RING_PAUSE_REP_COUNT=0
 
 #
 # Compile librte_mempool
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 10a10a8..7c39cd2 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -352,11 +352,6 @@ Known Issues
 
   3. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
 
-  ``RTE_RING_PAUSE_REP_COUNT`` is defined for rte_ring to reduce contention. It's mainly for case 2, a yield is issued after number of times pause repeat.
-
-  It adds a sched_yield() syscall if the thread spins for too long while waiting on the other thread to finish its operations on the ring.
-  This gives the preempted thread a chance to proceed and finish with the ring enqueue/dequeue operation.
-
 + rte_timer
 
   Running  ``rte_timer_manager()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index c059daa..8f940c6 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -114,11 +114,6 @@ enum rte_ring_queue_behavior {
 #define RTE_RING_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
 			   sizeof(RTE_RING_MZ_PREFIX) + 1)
 
-#ifndef RTE_RING_PAUSE_REP_COUNT
-#define RTE_RING_PAUSE_REP_COUNT 0 /**< Yield after pause num of times, no yield
-                                    *   if RTE_RING_PAUSE_REP not defined. */
-#endif
-
 struct rte_memzone; /* forward declaration, so as not to require memzone.h */
 
 /* structure to hold a pair of head/tail values and other metadata */
@@ -388,7 +383,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	uint32_t cons_tail, free_entries;
 	const unsigned max = n;
 	int success;
-	unsigned i, rep = 0;
+	unsigned int i;
 	uint32_t mask = r->mask;
 	int ret;
 
@@ -442,18 +437,9 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	 * If there are other enqueues in progress that preceded us,
 	 * we need to wait for them to complete
 	 */
-	while (unlikely(r->prod.tail != prod_head)) {
+	while (unlikely(r->prod.tail != prod_head))
 		rte_pause();
 
-		/* Set RTE_RING_PAUSE_REP_COUNT to avoid spin too long waiting
-		 * for other thread finish. It gives pre-empted thread a chance
-		 * to proceed and finish with ring dequeue operation. */
-		if (RTE_RING_PAUSE_REP_COUNT &&
-		    ++rep == RTE_RING_PAUSE_REP_COUNT) {
-			rep = 0;
-			sched_yield();
-		}
-	}
 	r->prod.tail = prod_next;
 	return ret;
 }
@@ -486,7 +472,7 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 {
 	uint32_t prod_head, cons_tail;
 	uint32_t prod_next, free_entries;
-	unsigned i;
+	unsigned int i;
 	uint32_t mask = r->mask;
 	int ret;
 
@@ -563,7 +549,7 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 	uint32_t cons_next, entries;
 	const unsigned max = n;
 	int success;
-	unsigned i, rep = 0;
+	unsigned int i;
 	uint32_t mask = r->mask;
 
 	/* Avoid the unnecessary cmpset operation below, which is also
@@ -608,18 +594,9 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 	 * If there are other dequeues in progress that preceded us,
 	 * we need to wait for them to complete
 	 */
-	while (unlikely(r->cons.tail != cons_head)) {
+	while (unlikely(r->cons.tail != cons_head))
 		rte_pause();
 
-		/* Set RTE_RING_PAUSE_REP_COUNT to avoid spin too long waiting
-		 * for other thread finish. It gives pre-empted thread a chance
-		 * to proceed and finish with ring dequeue operation. */
-		if (RTE_RING_PAUSE_REP_COUNT &&
-		    ++rep == RTE_RING_PAUSE_REP_COUNT) {
-			rep = 0;
-			sched_yield();
-		}
-	}
 	r->cons.tail = cons_next;
 
 	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
@@ -654,7 +631,7 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
 {
 	uint32_t cons_head, prod_tail;
 	uint32_t cons_next, entries;
-	unsigned i;
+	unsigned int i;
 	uint32_t mask = r->mask;
 
 	cons_head = r->cons.head;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 09/19] ring: remove watermark support
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (10 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 08/19] ring: remove the yield when waiting for tail update Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 10/19] ring: make bulk and burst fn return vals consistent Bruce Richardson
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

Remove the watermark support. A future commit will add support for having
enqueue functions return the amount of free space in the ring, which will
allow applications to implement their own watermark checks, while also
being more useful to the app.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/commands.c        |  52 ----------------
 app/test/test_ring.c       | 149 +--------------------------------------------
 examples/Makefile          |   2 +-
 lib/librte_ring/rte_ring.c |  23 -------
 lib/librte_ring/rte_ring.h |  57 +----------------
 5 files changed, 6 insertions(+), 277 deletions(-)

diff --git a/app/test/commands.c b/app/test/commands.c
index 2df46b0..551c81d 100644
--- a/app/test/commands.c
+++ b/app/test/commands.c
@@ -228,57 +228,6 @@ cmdline_parse_inst_t cmd_dump_one = {
 
 /****************/
 
-struct cmd_set_ring_result {
-	cmdline_fixed_string_t set;
-	cmdline_fixed_string_t name;
-	uint32_t value;
-};
-
-static void cmd_set_ring_parsed(void *parsed_result, struct cmdline *cl,
-				__attribute__((unused)) void *data)
-{
-	struct cmd_set_ring_result *res = parsed_result;
-	struct rte_ring *r;
-	int ret;
-
-	r = rte_ring_lookup(res->name);
-	if (r == NULL) {
-		cmdline_printf(cl, "Cannot find ring\n");
-		return;
-	}
-
-	if (!strcmp(res->set, "set_watermark")) {
-		ret = rte_ring_set_water_mark(r, res->value);
-		if (ret != 0)
-			cmdline_printf(cl, "Cannot set water mark\n");
-	}
-}
-
-cmdline_parse_token_string_t cmd_set_ring_set =
-	TOKEN_STRING_INITIALIZER(struct cmd_set_ring_result, set,
-				 "set_watermark");
-
-cmdline_parse_token_string_t cmd_set_ring_name =
-	TOKEN_STRING_INITIALIZER(struct cmd_set_ring_result, name, NULL);
-
-cmdline_parse_token_num_t cmd_set_ring_value =
-	TOKEN_NUM_INITIALIZER(struct cmd_set_ring_result, value, UINT32);
-
-cmdline_parse_inst_t cmd_set_ring = {
-	.f = cmd_set_ring_parsed,  /* function to call */
-	.data = NULL,      /* 2nd arg of func */
-	.help_str = "set watermark: "
-			"set_watermark <ring_name> <value>",
-	.tokens = {        /* token list, NULL terminated */
-		(void *)&cmd_set_ring_set,
-		(void *)&cmd_set_ring_name,
-		(void *)&cmd_set_ring_value,
-		NULL,
-	},
-};
-
-/****************/
-
 struct cmd_quit_result {
 	cmdline_fixed_string_t quit;
 };
@@ -419,7 +368,6 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_autotest,
 	(cmdline_parse_inst_t *)&cmd_dump,
 	(cmdline_parse_inst_t *)&cmd_dump_one,
-	(cmdline_parse_inst_t *)&cmd_set_ring,
 	(cmdline_parse_inst_t *)&cmd_quit,
 	(cmdline_parse_inst_t *)&cmd_set_rxtx,
 	(cmdline_parse_inst_t *)&cmd_set_rxtx_anchor,
diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index 0cf55b5..666a451 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -78,21 +78,6 @@
  *      - Dequeue one object, two objects, MAX_BULK objects
  *      - Check that dequeued pointers are correct
  *
- *    - Test watermark and default bulk enqueue/dequeue:
- *
- *      - Set watermark
- *      - Set default bulk value
- *      - Enqueue objects, check that -EDQUOT is returned when
- *        watermark is exceeded
- *      - Check that dequeued pointers are correct
- *
- * #. Check live watermark change
- *
- *    - Start a loop on another lcore that will enqueue and dequeue
- *      objects in a ring. It will monitor the value of watermark.
- *    - At the same time, change the watermark on the master lcore.
- *    - The slave lcore will check that watermark changes from 16 to 32.
- *
  * #. Performance tests.
  *
  * Tests done in test_ring_perf.c
@@ -115,123 +100,6 @@ static struct rte_ring *r;
 
 #define	TEST_RING_FULL_EMTPY_ITER	8
 
-static int
-check_live_watermark_change(__attribute__((unused)) void *dummy)
-{
-	uint64_t hz = rte_get_timer_hz();
-	void *obj_table[MAX_BULK];
-	unsigned watermark, watermark_old = 16;
-	uint64_t cur_time, end_time;
-	int64_t diff = 0;
-	int i, ret;
-	unsigned count = 4;
-
-	/* init the object table */
-	memset(obj_table, 0, sizeof(obj_table));
-	end_time = rte_get_timer_cycles() + (hz / 4);
-
-	/* check that bulk and watermark are 4 and 32 (respectively) */
-	while (diff >= 0) {
-
-		/* add in ring until we reach watermark */
-		ret = 0;
-		for (i = 0; i < 16; i ++) {
-			if (ret != 0)
-				break;
-			ret = rte_ring_enqueue_bulk(r, obj_table, count);
-		}
-
-		if (ret != -EDQUOT) {
-			printf("Cannot enqueue objects, or watermark not "
-			       "reached (ret=%d)\n", ret);
-			return -1;
-		}
-
-		/* read watermark, the only change allowed is from 16 to 32 */
-		watermark = r->watermark;
-		if (watermark != watermark_old &&
-		    (watermark_old != 16 || watermark != 32)) {
-			printf("Bad watermark change %u -> %u\n", watermark_old,
-			       watermark);
-			return -1;
-		}
-		watermark_old = watermark;
-
-		/* dequeue objects from ring */
-		while (i--) {
-			ret = rte_ring_dequeue_bulk(r, obj_table, count);
-			if (ret != 0) {
-				printf("Cannot dequeue (ret=%d)\n", ret);
-				return -1;
-			}
-		}
-
-		cur_time = rte_get_timer_cycles();
-		diff = end_time - cur_time;
-	}
-
-	if (watermark_old != 32 ) {
-		printf(" watermark was not updated (wm=%u)\n",
-		       watermark_old);
-		return -1;
-	}
-
-	return 0;
-}
-
-static int
-test_live_watermark_change(void)
-{
-	unsigned lcore_id = rte_lcore_id();
-	unsigned lcore_id2 = rte_get_next_lcore(lcore_id, 0, 1);
-
-	printf("Test watermark live modification\n");
-	rte_ring_set_water_mark(r, 16);
-
-	/* launch a thread that will enqueue and dequeue, checking
-	 * watermark and quota */
-	rte_eal_remote_launch(check_live_watermark_change, NULL, lcore_id2);
-
-	rte_delay_ms(100);
-	rte_ring_set_water_mark(r, 32);
-	rte_delay_ms(100);
-
-	if (rte_eal_wait_lcore(lcore_id2) < 0)
-		return -1;
-
-	return 0;
-}
-
-/* Test for catch on invalid watermark values */
-static int
-test_set_watermark( void ){
-	unsigned count;
-	int setwm;
-
-	struct rte_ring *r = rte_ring_lookup("test_ring_basic_ex");
-	if(r == NULL){
-		printf( " ring lookup failed\n" );
-		goto error;
-	}
-	count = r->size*2;
-	setwm = rte_ring_set_water_mark(r, count);
-	if (setwm != -EINVAL){
-		printf("Test failed to detect invalid watermark count value\n");
-		goto error;
-	}
-
-	count = 0;
-	rte_ring_set_water_mark(r, count);
-	if (r->watermark != r->size) {
-		printf("Test failed to detect invalid watermark count value\n");
-		goto error;
-	}
-	return 0;
-
-error:
-	return -1;
-}
-
 /*
  * helper routine for test_ring_basic
  */
@@ -418,8 +286,7 @@ test_ring_basic(void)
 	cur_src = src;
 	cur_dst = dst;
 
-	printf("test watermark and default bulk enqueue / dequeue\n");
-	rte_ring_set_water_mark(r, 20);
+	printf("test default bulk enqueue / dequeue\n");
 	num_elems = 16;
 
 	cur_src = src;
@@ -433,8 +300,8 @@ test_ring_basic(void)
 	}
 	ret = rte_ring_enqueue_bulk(r, cur_src, num_elems);
 	cur_src += num_elems;
-	if (ret != -EDQUOT) {
-		printf("Watermark not exceeded\n");
+	if (ret != 0) {
+		printf("Cannot enqueue\n");
 		goto fail;
 	}
 	ret = rte_ring_dequeue_bulk(r, cur_dst, num_elems);
@@ -930,16 +797,6 @@ test_ring(void)
 		return -1;
 
 	/* basic operations */
-	if (test_live_watermark_change() < 0)
-		return -1;
-
-	if ( test_set_watermark() < 0){
-		printf ("Test failed to detect invalid parameter\n");
-		return -1;
-	}
-	else
-		printf ( "Test detected forced bad watermark values\n");
-
 	if ( test_create_count_odd() < 0){
 			printf ("Test failed to detect odd count\n");
 			return -1;
diff --git a/examples/Makefile b/examples/Makefile
index da2bfdd..19cd5ad 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -81,7 +81,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_REORDER) += packet_ordering
 DIRS-$(CONFIG_RTE_LIBRTE_IEEE1588) += ptpclient
 DIRS-$(CONFIG_RTE_LIBRTE_METER) += qos_meter
 DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += qos_sched
-DIRS-y += quota_watermark
+#DIRS-y += quota_watermark
 DIRS-$(CONFIG_RTE_ETHDEV_RXTX_CALLBACKS) += rxtx_callbacks
 DIRS-y += skeleton
 ifeq ($(CONFIG_RTE_LIBRTE_HASH),y)
diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index 2a04f05..e5af4ed 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -138,7 +138,6 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
 	if (ret < 0 || ret >= (int)sizeof(r->name))
 		return -ENAMETOOLONG;
 	r->flags = flags;
-	r->watermark = count;
 	r->prod.sp_enqueue = !!(flags & RING_F_SP_ENQ);
 	r->cons.sc_dequeue = !!(flags & RING_F_SC_DEQ);
 	r->size = count;
@@ -256,24 +255,6 @@ rte_ring_free(struct rte_ring *r)
 	rte_free(te);
 }
 
-/*
- * change the high water mark. If *count* is 0, water marking is
- * disabled
- */
-int
-rte_ring_set_water_mark(struct rte_ring *r, unsigned count)
-{
-	if (count >= r->size)
-		return -EINVAL;
-
-	/* if count is 0, disable the watermarking */
-	if (count == 0)
-		count = r->size;
-
-	r->watermark = count;
-	return 0;
-}
-
 /* dump the status of the ring on the console */
 void
 rte_ring_dump(FILE *f, const struct rte_ring *r)
@@ -287,10 +268,6 @@ rte_ring_dump(FILE *f, const struct rte_ring *r)
 	fprintf(f, "  ph=%"PRIu32"\n", r->prod.head);
 	fprintf(f, "  used=%u\n", rte_ring_count(r));
 	fprintf(f, "  avail=%u\n", rte_ring_free_count(r));
-	if (r->watermark == r->size)
-		fprintf(f, "  watermark=0\n");
-	else
-		fprintf(f, "  watermark=%"PRIu32"\n", r->watermark);
 }
 
 /* dump the status of all rings on the console */
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 8f940c6..1962b87 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -148,7 +148,6 @@ struct rte_ring {
 			/**< Memzone, if any, containing the rte_ring */
 	uint32_t size;           /**< Size of ring. */
 	uint32_t mask;           /**< Mask (size-1) of ring. */
-	uint32_t watermark;      /**< Max items before EDQUOT in producer. */
 
 	/** Ring producer status. */
 	struct rte_ring_ht_ptr prod __rte_aligned(RTE_CACHE_LINE_SIZE * 2);
@@ -269,26 +268,6 @@ struct rte_ring *rte_ring_create(const char *name, unsigned count,
 void rte_ring_free(struct rte_ring *r);
 
 /**
- * Change the high water mark.
- *
- * If *count* is 0, water marking is disabled. Otherwise, it is set to the
- * *count* value. The *count* value must be greater than 0 and less
- * than the ring size.
- *
- * This function can be called at any time (not necessarily at
- * initialization).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param count
- *   The new water mark value.
- * @return
- *   - 0: Success; water mark changed.
- *   - -EINVAL: Invalid water mark value.
- */
-int rte_ring_set_water_mark(struct rte_ring *r, unsigned count);
-
-/**
  * Dump the status of the ring to a file.
  *
  * @param f
@@ -369,8 +348,6 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
  *   Depend on the behavior value
  *   if behavior = RTE_RING_QUEUE_FIXED
  *   - 0: Success; objects enqueue.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
  *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
  *   if behavior = RTE_RING_QUEUE_VARIABLE
  *   - n: Actual number of objects enqueued.
@@ -385,7 +362,6 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	int success;
 	unsigned int i;
 	uint32_t mask = r->mask;
-	int ret;
 
 	/* Avoid the unnecessary cmpset operation below, which is also
 	 * potentially harmful when n equals 0. */
@@ -426,13 +402,6 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	ENQUEUE_PTRS();
 	rte_smp_wmb();
 
-	/* if we exceed the watermark */
-	if (unlikely(((mask + 1) - free_entries + n) > r->watermark))
-		ret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :
-				(int)(n | RTE_RING_QUOT_EXCEED);
-	else
-		ret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
-
 	/*
 	 * If there are other enqueues in progress that preceded us,
 	 * we need to wait for them to complete
@@ -441,7 +410,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 		rte_pause();
 
 	r->prod.tail = prod_next;
-	return ret;
+	return (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
 }
 
 /**
@@ -460,8 +429,6 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
  *   Depend on the behavior value
  *   if behavior = RTE_RING_QUEUE_FIXED
  *   - 0: Success; objects enqueue.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
  *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
  *   if behavior = RTE_RING_QUEUE_VARIABLE
  *   - n: Actual number of objects enqueued.
@@ -474,7 +441,6 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	uint32_t prod_next, free_entries;
 	unsigned int i;
 	uint32_t mask = r->mask;
-	int ret;
 
 	prod_head = r->prod.head;
 	cons_tail = r->cons.tail;
@@ -503,15 +469,8 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	ENQUEUE_PTRS();
 	rte_smp_wmb();
 
-	/* if we exceed the watermark */
-	if (unlikely(((mask + 1) - free_entries + n) > r->watermark))
-		ret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :
-			(int)(n | RTE_RING_QUOT_EXCEED);
-	else
-		ret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
-
 	r->prod.tail = prod_next;
-	return ret;
+	return (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
 }
 
 /**
@@ -677,8 +636,6 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
  *   The number of objects to add in the ring from the obj_table.
  * @return
  *   - 0: Success; objects enqueue.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
  *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
  */
 static inline int __attribute__((always_inline))
@@ -699,8 +656,6 @@ rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  *   The number of objects to add in the ring from the obj_table.
  * @return
  *   - 0: Success; objects enqueued.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
 static inline int __attribute__((always_inline))
@@ -725,8 +680,6 @@ rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  *   The number of objects to add in the ring from the obj_table.
  * @return
  *   - 0: Success; objects enqueued.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
 static inline int __attribute__((always_inline))
@@ -751,8 +704,6 @@ rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  *   A pointer to the object to be added.
  * @return
  *   - 0: Success; objects enqueued.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
 static inline int __attribute__((always_inline))
@@ -770,8 +721,6 @@ rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
  *   A pointer to the object to be added.
  * @return
  *   - 0: Success; objects enqueued.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
 static inline int __attribute__((always_inline))
@@ -793,8 +742,6 @@ rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
  *   A pointer to the object to be added.
  * @return
  *   - 0: Success; objects enqueued.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
 static inline int __attribute__((always_inline))
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 10/19] ring: make bulk and burst fn return vals consistent
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (11 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 09/19] ring: remove watermark support Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 11/19] ring: allow enq fns to return free space value Bruce Richardson
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

The bulk fns for rings returns 0 for all elements enqueued and negative
for no space. Change that to make them consistent with the burst functions
in returning the number of elements enqueued/dequeued, i.e. 0 or N.
This change also allows the return value from enq/deq to be used directly
without a branch for error checking.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test-pipeline/pipeline_hash.c                  |  2 +-
 app/test-pipeline/runtime.c                        |  8 +--
 app/test/test_ring.c                               | 44 ++++++-------
 app/test/test_ring_perf.c                          |  8 +--
 doc/guides/sample_app_ug/server_node_efd.rst       |  2 +-
 examples/load_balancer/runtime.c                   | 16 ++---
 .../client_server_mp/mp_client/client.c            |  8 +--
 .../client_server_mp/mp_server/main.c              |  2 +-
 examples/qos_sched/app_thread.c                    |  8 ++-
 examples/server_node_efd/node/node.c               |  2 +-
 examples/server_node_efd/server/main.c             |  2 +-
 lib/librte_mempool/rte_mempool_ring.c              | 12 ++--
 lib/librte_ring/rte_ring.h                         | 75 +++++++++-------------
 13 files changed, 87 insertions(+), 102 deletions(-)

diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 10d2869..1ac0aa8 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -547,6 +547,6 @@ app_main_loop_rx_metadata(void) {
 				app.rings_rx[i],
 				(void **) app.mbuf_rx.array,
 				n_mbufs);
-		} while (ret < 0);
+		} while (ret == 0);
 	}
 }
diff --git a/app/test-pipeline/runtime.c b/app/test-pipeline/runtime.c
index 42a6142..4e20669 100644
--- a/app/test-pipeline/runtime.c
+++ b/app/test-pipeline/runtime.c
@@ -98,7 +98,7 @@ app_main_loop_rx(void) {
 				app.rings_rx[i],
 				(void **) app.mbuf_rx.array,
 				n_mbufs);
-		} while (ret < 0);
+		} while (ret == 0);
 	}
 }
 
@@ -123,7 +123,7 @@ app_main_loop_worker(void) {
 			(void **) worker_mbuf->array,
 			app.burst_size_worker_read);
 
-		if (ret == -ENOENT)
+		if (ret == 0)
 			continue;
 
 		do {
@@ -131,7 +131,7 @@ app_main_loop_worker(void) {
 				app.rings_tx[i ^ 1],
 				(void **) worker_mbuf->array,
 				app.burst_size_worker_write);
-		} while (ret < 0);
+		} while (ret == 0);
 	}
 }
 
@@ -152,7 +152,7 @@ app_main_loop_tx(void) {
 			(void **) &app.mbuf_tx[i].array[n_mbufs],
 			app.burst_size_tx_read);
 
-		if (ret == -ENOENT)
+		if (ret == 0)
 			continue;
 
 		n_mbufs += app.burst_size_tx_read;
diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index 666a451..4378fd0 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -117,12 +117,12 @@ test_ring_basic_full_empty(void * const src[], void *dst[])
 		rand = RTE_MAX(rte_rand() % RING_SIZE, 1UL);
 		printf("%s: iteration %u, random shift: %u;\n",
 		    __func__, i, rand);
-		TEST_RING_VERIFY(-ENOBUFS != rte_ring_enqueue_bulk(r, src,
+		TEST_RING_VERIFY(0 != rte_ring_enqueue_bulk(r, src,
 		    rand));
-		TEST_RING_VERIFY(0 == rte_ring_dequeue_bulk(r, dst, rand));
+		TEST_RING_VERIFY(rand == rte_ring_dequeue_bulk(r, dst, rand));
 
 		/* fill the ring */
-		TEST_RING_VERIFY(-ENOBUFS != rte_ring_enqueue_bulk(r, src,
+		TEST_RING_VERIFY(0 != rte_ring_enqueue_bulk(r, src,
 		    rsz));
 		TEST_RING_VERIFY(0 == rte_ring_free_count(r));
 		TEST_RING_VERIFY(rsz == rte_ring_count(r));
@@ -130,7 +130,7 @@ test_ring_basic_full_empty(void * const src[], void *dst[])
 		TEST_RING_VERIFY(0 == rte_ring_empty(r));
 
 		/* empty the ring */
-		TEST_RING_VERIFY(0 == rte_ring_dequeue_bulk(r, dst, rsz));
+		TEST_RING_VERIFY(rsz == rte_ring_dequeue_bulk(r, dst, rsz));
 		TEST_RING_VERIFY(rsz == rte_ring_free_count(r));
 		TEST_RING_VERIFY(0 == rte_ring_count(r));
 		TEST_RING_VERIFY(0 == rte_ring_full(r));
@@ -171,37 +171,37 @@ test_ring_basic(void)
 	printf("enqueue 1 obj\n");
 	ret = rte_ring_sp_enqueue_bulk(r, cur_src, 1);
 	cur_src += 1;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	printf("enqueue 2 objs\n");
 	ret = rte_ring_sp_enqueue_bulk(r, cur_src, 2);
 	cur_src += 2;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	printf("enqueue MAX_BULK objs\n");
 	ret = rte_ring_sp_enqueue_bulk(r, cur_src, MAX_BULK);
 	cur_src += MAX_BULK;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	printf("dequeue 1 obj\n");
 	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 1);
 	cur_dst += 1;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	printf("dequeue 2 objs\n");
 	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 2);
 	cur_dst += 2;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	printf("dequeue MAX_BULK objs\n");
 	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, MAX_BULK);
 	cur_dst += MAX_BULK;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	/* check data */
@@ -217,37 +217,37 @@ test_ring_basic(void)
 	printf("enqueue 1 obj\n");
 	ret = rte_ring_mp_enqueue_bulk(r, cur_src, 1);
 	cur_src += 1;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	printf("enqueue 2 objs\n");
 	ret = rte_ring_mp_enqueue_bulk(r, cur_src, 2);
 	cur_src += 2;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	printf("enqueue MAX_BULK objs\n");
 	ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK);
 	cur_src += MAX_BULK;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	printf("dequeue 1 obj\n");
 	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 1);
 	cur_dst += 1;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	printf("dequeue 2 objs\n");
 	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 2);
 	cur_dst += 2;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	printf("dequeue MAX_BULK objs\n");
 	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK);
 	cur_dst += MAX_BULK;
-	if (ret != 0)
+	if (ret == 0)
 		goto fail;
 
 	/* check data */
@@ -264,11 +264,11 @@ test_ring_basic(void)
 	for (i = 0; i<RING_SIZE/MAX_BULK; i++) {
 		ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK);
 		cur_src += MAX_BULK;
-		if (ret != 0)
+		if (ret == 0)
 			goto fail;
 		ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK);
 		cur_dst += MAX_BULK;
-		if (ret != 0)
+		if (ret == 0)
 			goto fail;
 	}
 
@@ -294,25 +294,25 @@ test_ring_basic(void)
 
 	ret = rte_ring_enqueue_bulk(r, cur_src, num_elems);
 	cur_src += num_elems;
-	if (ret != 0) {
+	if (ret == 0) {
 		printf("Cannot enqueue\n");
 		goto fail;
 	}
 	ret = rte_ring_enqueue_bulk(r, cur_src, num_elems);
 	cur_src += num_elems;
-	if (ret != 0) {
+	if (ret == 0) {
 		printf("Cannot enqueue\n");
 		goto fail;
 	}
 	ret = rte_ring_dequeue_bulk(r, cur_dst, num_elems);
 	cur_dst += num_elems;
-	if (ret != 0) {
+	if (ret == 0) {
 		printf("Cannot dequeue\n");
 		goto fail;
 	}
 	ret = rte_ring_dequeue_bulk(r, cur_dst, num_elems);
 	cur_dst += num_elems;
-	if (ret != 0) {
+	if (ret == 0) {
 		printf("Cannot dequeue2\n");
 		goto fail;
 	}
diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c
index 320c20c..8ccbdef 100644
--- a/app/test/test_ring_perf.c
+++ b/app/test/test_ring_perf.c
@@ -195,13 +195,13 @@ enqueue_bulk(void *p)
 
 	const uint64_t sp_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		while (rte_ring_sp_enqueue_bulk(r, burst, size) != 0)
+		while (rte_ring_sp_enqueue_bulk(r, burst, size) == 0)
 			rte_pause();
 	const uint64_t sp_end = rte_rdtsc();
 
 	const uint64_t mp_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		while (rte_ring_mp_enqueue_bulk(r, burst, size) != 0)
+		while (rte_ring_mp_enqueue_bulk(r, burst, size) == 0)
 			rte_pause();
 	const uint64_t mp_end = rte_rdtsc();
 
@@ -230,13 +230,13 @@ dequeue_bulk(void *p)
 
 	const uint64_t sc_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		while (rte_ring_sc_dequeue_bulk(r, burst, size) != 0)
+		while (rte_ring_sc_dequeue_bulk(r, burst, size) == 0)
 			rte_pause();
 	const uint64_t sc_end = rte_rdtsc();
 
 	const uint64_t mc_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		while (rte_ring_mc_dequeue_bulk(r, burst, size) != 0)
+		while (rte_ring_mc_dequeue_bulk(r, burst, size) == 0)
 			rte_pause();
 	const uint64_t mc_end = rte_rdtsc();
 
diff --git a/doc/guides/sample_app_ug/server_node_efd.rst b/doc/guides/sample_app_ug/server_node_efd.rst
index 9b69cfe..e3a63c8 100644
--- a/doc/guides/sample_app_ug/server_node_efd.rst
+++ b/doc/guides/sample_app_ug/server_node_efd.rst
@@ -286,7 +286,7 @@ repeated infinitely.
 
         cl = &nodes[node];
         if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[node].buffer,
-                cl_rx_buf[node].count) != 0){
+                cl_rx_buf[node].count) != cl_rx_buf[node].count){
             for (j = 0; j < cl_rx_buf[node].count; j++)
                 rte_pktmbuf_free(cl_rx_buf[node].buffer[j]);
             cl->stats.rx_drop += cl_rx_buf[node].count;
diff --git a/examples/load_balancer/runtime.c b/examples/load_balancer/runtime.c
index 6944325..82b10bc 100644
--- a/examples/load_balancer/runtime.c
+++ b/examples/load_balancer/runtime.c
@@ -146,7 +146,7 @@ app_lcore_io_rx_buffer_to_send (
 		(void **) lp->rx.mbuf_out[worker].array,
 		bsz);
 
-	if (unlikely(ret == -ENOBUFS)) {
+	if (unlikely(ret == 0)) {
 		uint32_t k;
 		for (k = 0; k < bsz; k ++) {
 			struct rte_mbuf *m = lp->rx.mbuf_out[worker].array[k];
@@ -312,7 +312,7 @@ app_lcore_io_rx_flush(struct app_lcore_params_io *lp, uint32_t n_workers)
 			(void **) lp->rx.mbuf_out[worker].array,
 			lp->rx.mbuf_out[worker].n_mbufs);
 
-		if (unlikely(ret < 0)) {
+		if (unlikely(ret == 0)) {
 			uint32_t k;
 			for (k = 0; k < lp->rx.mbuf_out[worker].n_mbufs; k ++) {
 				struct rte_mbuf *pkt_to_free = lp->rx.mbuf_out[worker].array[k];
@@ -349,9 +349,8 @@ app_lcore_io_tx(
 				(void **) &lp->tx.mbuf_out[port].array[n_mbufs],
 				bsz_rd);
 
-			if (unlikely(ret == -ENOENT)) {
+			if (unlikely(ret == 0))
 				continue;
-			}
 
 			n_mbufs += bsz_rd;
 
@@ -505,9 +504,8 @@ app_lcore_worker(
 			(void **) lp->mbuf_in.array,
 			bsz_rd);
 
-		if (unlikely(ret == -ENOENT)) {
+		if (unlikely(ret == 0))
 			continue;
-		}
 
 #if APP_WORKER_DROP_ALL_PACKETS
 		for (j = 0; j < bsz_rd; j ++) {
@@ -559,7 +557,7 @@ app_lcore_worker(
 
 #if APP_STATS
 			lp->rings_out_iters[port] ++;
-			if (ret == 0) {
+			if (ret > 0) {
 				lp->rings_out_count[port] += 1;
 			}
 			if (lp->rings_out_iters[port] == APP_STATS){
@@ -572,7 +570,7 @@ app_lcore_worker(
 			}
 #endif
 
-			if (unlikely(ret == -ENOBUFS)) {
+			if (unlikely(ret == 0)) {
 				uint32_t k;
 				for (k = 0; k < bsz_wr; k ++) {
 					struct rte_mbuf *pkt_to_free = lp->mbuf_out[port].array[k];
@@ -609,7 +607,7 @@ app_lcore_worker_flush(struct app_lcore_params_worker *lp)
 			(void **) lp->mbuf_out[port].array,
 			lp->mbuf_out[port].n_mbufs);
 
-		if (unlikely(ret < 0)) {
+		if (unlikely(ret == 0)) {
 			uint32_t k;
 			for (k = 0; k < lp->mbuf_out[port].n_mbufs; k ++) {
 				struct rte_mbuf *pkt_to_free = lp->mbuf_out[port].array[k];
diff --git a/examples/multi_process/client_server_mp/mp_client/client.c b/examples/multi_process/client_server_mp/mp_client/client.c
index d4f9ca3..dca9eb9 100644
--- a/examples/multi_process/client_server_mp/mp_client/client.c
+++ b/examples/multi_process/client_server_mp/mp_client/client.c
@@ -276,14 +276,10 @@ main(int argc, char *argv[])
 	printf("[Press Ctrl-C to quit ...]\n");
 
 	for (;;) {
-		uint16_t i, rx_pkts = PKT_READ_SIZE;
+		uint16_t i, rx_pkts;
 		uint8_t port;
 
-		/* try dequeuing max possible packets first, if that fails, get the
-		 * most we can. Loop body should only execute once, maximum */
-		while (rx_pkts > 0 &&
-				unlikely(rte_ring_dequeue_bulk(rx_ring, pkts, rx_pkts) != 0))
-			rx_pkts = (uint16_t)RTE_MIN(rte_ring_count(rx_ring), PKT_READ_SIZE);
+		rx_pkts = rte_ring_dequeue_burst(rx_ring, pkts, PKT_READ_SIZE);
 
 		if (unlikely(rx_pkts == 0)){
 			if (need_flush)
diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
index a6dc12d..19c95b2 100644
--- a/examples/multi_process/client_server_mp/mp_server/main.c
+++ b/examples/multi_process/client_server_mp/mp_server/main.c
@@ -227,7 +227,7 @@ flush_rx_queue(uint16_t client)
 
 	cl = &clients[client];
 	if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[client].buffer,
-			cl_rx_buf[client].count) != 0){
+			cl_rx_buf[client].count) == 0){
 		for (j = 0; j < cl_rx_buf[client].count; j++)
 			rte_pktmbuf_free(cl_rx_buf[client].buffer[j]);
 		cl->stats.rx_drop += cl_rx_buf[client].count;
diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
index 70fdcdb..dab4594 100644
--- a/examples/qos_sched/app_thread.c
+++ b/examples/qos_sched/app_thread.c
@@ -107,7 +107,7 @@ app_rx_thread(struct thread_conf **confs)
 			}
 
 			if (unlikely(rte_ring_sp_enqueue_bulk(conf->rx_ring,
-								(void **)rx_mbufs, nb_rx) != 0)) {
+					(void **)rx_mbufs, nb_rx) == 0)) {
 				for(i = 0; i < nb_rx; i++) {
 					rte_pktmbuf_free(rx_mbufs[i]);
 
@@ -180,7 +180,7 @@ app_tx_thread(struct thread_conf **confs)
 	while ((conf = confs[conf_idx])) {
 		retval = rte_ring_sc_dequeue_bulk(conf->tx_ring, (void **)mbufs,
 					burst_conf.qos_dequeue);
-		if (likely(retval == 0)) {
+		if (likely(retval != 0)) {
 			app_send_packets(conf, mbufs, burst_conf.qos_dequeue);
 
 			conf->counter = 0; /* reset empty read loop counter */
@@ -230,7 +230,9 @@ app_worker_thread(struct thread_conf **confs)
 		nb_pkt = rte_sched_port_dequeue(conf->sched_port, mbufs,
 					burst_conf.qos_dequeue);
 		if (likely(nb_pkt > 0))
-			while (rte_ring_sp_enqueue_bulk(conf->tx_ring, (void **)mbufs, nb_pkt) != 0);
+			while (rte_ring_sp_enqueue_bulk(conf->tx_ring,
+					(void **)mbufs, nb_pkt) == 0)
+				; /* empty body */
 
 		conf_idx++;
 		if (confs[conf_idx] == NULL)
diff --git a/examples/server_node_efd/node/node.c b/examples/server_node_efd/node/node.c
index a6c0c70..9ec6a05 100644
--- a/examples/server_node_efd/node/node.c
+++ b/examples/server_node_efd/node/node.c
@@ -392,7 +392,7 @@ main(int argc, char *argv[])
 		 */
 		while (rx_pkts > 0 &&
 				unlikely(rte_ring_dequeue_bulk(rx_ring, pkts,
-					rx_pkts) != 0))
+					rx_pkts) == 0))
 			rx_pkts = (uint16_t)RTE_MIN(rte_ring_count(rx_ring),
 					PKT_READ_SIZE);
 
diff --git a/examples/server_node_efd/server/main.c b/examples/server_node_efd/server/main.c
index 1a54d1b..3eb7fac 100644
--- a/examples/server_node_efd/server/main.c
+++ b/examples/server_node_efd/server/main.c
@@ -247,7 +247,7 @@ flush_rx_queue(uint16_t node)
 
 	cl = &nodes[node];
 	if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[node].buffer,
-			cl_rx_buf[node].count) != 0){
+			cl_rx_buf[node].count) != cl_rx_buf[node].count){
 		for (j = 0; j < cl_rx_buf[node].count; j++)
 			rte_pktmbuf_free(cl_rx_buf[node].buffer[j]);
 		cl->stats.rx_drop += cl_rx_buf[node].count;
diff --git a/lib/librte_mempool/rte_mempool_ring.c b/lib/librte_mempool/rte_mempool_ring.c
index b9aa64d..409b860 100644
--- a/lib/librte_mempool/rte_mempool_ring.c
+++ b/lib/librte_mempool/rte_mempool_ring.c
@@ -42,26 +42,30 @@ static int
 common_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
 		unsigned n)
 {
-	return rte_ring_mp_enqueue_bulk(mp->pool_data, obj_table, n);
+	return rte_ring_mp_enqueue_bulk(mp->pool_data,
+			obj_table, n) == 0 ? -ENOBUFS : 0;
 }
 
 static int
 common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
 		unsigned n)
 {
-	return rte_ring_sp_enqueue_bulk(mp->pool_data, obj_table, n);
+	return rte_ring_sp_enqueue_bulk(mp->pool_data,
+			obj_table, n) == 0 ? -ENOBUFS : 0;
 }
 
 static int
 common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
 {
-	return rte_ring_mc_dequeue_bulk(mp->pool_data, obj_table, n);
+	return rte_ring_mc_dequeue_bulk(mp->pool_data,
+			obj_table, n) == 0 ? -ENOBUFS : 0;
 }
 
 static int
 common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
 {
-	return rte_ring_sc_dequeue_bulk(mp->pool_data, obj_table, n);
+	return rte_ring_sc_dequeue_bulk(mp->pool_data,
+			obj_table, n) == 0 ? -ENOBUFS : 0;
 }
 
 static unsigned
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 1962b87..d4d44ce 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -352,7 +352,7 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
  *   if behavior = RTE_RING_QUEUE_VARIABLE
  *   - n: Actual number of objects enqueued.
  */
-static inline int __attribute__((always_inline))
+static inline unsigned int __attribute__((always_inline))
 __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 			 unsigned n, enum rte_ring_queue_behavior behavior)
 {
@@ -384,7 +384,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 		/* check that we have enough room in ring */
 		if (unlikely(n > free_entries)) {
 			if (behavior == RTE_RING_QUEUE_FIXED)
-				return -ENOBUFS;
+				return 0;
 			else {
 				/* No free entry available */
 				if (unlikely(free_entries == 0))
@@ -410,7 +410,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 		rte_pause();
 
 	r->prod.tail = prod_next;
-	return (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
+	return n;
 }
 
 /**
@@ -433,7 +433,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
  *   if behavior = RTE_RING_QUEUE_VARIABLE
  *   - n: Actual number of objects enqueued.
  */
-static inline int __attribute__((always_inline))
+static inline unsigned int __attribute__((always_inline))
 __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 			 unsigned n, enum rte_ring_queue_behavior behavior)
 {
@@ -453,7 +453,7 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	/* check that we have enough room in ring */
 	if (unlikely(n > free_entries)) {
 		if (behavior == RTE_RING_QUEUE_FIXED)
-			return -ENOBUFS;
+			return 0;
 		else {
 			/* No free entry available */
 			if (unlikely(free_entries == 0))
@@ -470,7 +470,7 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	rte_smp_wmb();
 
 	r->prod.tail = prod_next;
-	return (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
+	return n;
 }
 
 /**
@@ -500,7 +500,7 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
  *   - n: Actual number of objects dequeued.
  */
 
-static inline int __attribute__((always_inline))
+static inline unsigned int __attribute__((always_inline))
 __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 		 unsigned n, enum rte_ring_queue_behavior behavior)
 {
@@ -532,7 +532,7 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 		/* Set the actual entries for dequeue */
 		if (n > entries) {
 			if (behavior == RTE_RING_QUEUE_FIXED)
-				return -ENOENT;
+				return 0;
 			else {
 				if (unlikely(entries == 0))
 					return 0;
@@ -558,7 +558,7 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 
 	r->cons.tail = cons_next;
 
-	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
+	return n;
 }
 
 /**
@@ -584,7 +584,7 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
  *   if behavior = RTE_RING_QUEUE_VARIABLE
  *   - n: Actual number of objects dequeued.
  */
-static inline int __attribute__((always_inline))
+static inline unsigned int __attribute__((always_inline))
 __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
 		 unsigned n, enum rte_ring_queue_behavior behavior)
 {
@@ -603,7 +603,7 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
 
 	if (n > entries) {
 		if (behavior == RTE_RING_QUEUE_FIXED)
-			return -ENOENT;
+			return 0;
 		else {
 			if (unlikely(entries == 0))
 				return 0;
@@ -619,7 +619,7 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
 	rte_smp_rmb();
 
 	r->cons.tail = cons_next;
-	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
+	return n;
 }
 
 /**
@@ -635,10 +635,9 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
  * @param n
  *   The number of objects to add in the ring from the obj_table.
  * @return
- *   - 0: Success; objects enqueue.
- *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
+ *   The number of objects enqueued, either 0 or n
  */
-static inline int __attribute__((always_inline))
+static inline unsigned int __attribute__((always_inline))
 rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
 			 unsigned n)
 {
@@ -655,10 +654,9 @@ rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  * @param n
  *   The number of objects to add in the ring from the obj_table.
  * @return
- *   - 0: Success; objects enqueued.
- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
+ *   The number of objects enqueued, either 0 or n
  */
-static inline int __attribute__((always_inline))
+static inline unsigned int __attribute__((always_inline))
 rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
 			 unsigned n)
 {
@@ -679,10 +677,9 @@ rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  * @param n
  *   The number of objects to add in the ring from the obj_table.
  * @return
- *   - 0: Success; objects enqueued.
- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
+ *   The number of objects enqueued, either 0 or n
  */
-static inline int __attribute__((always_inline))
+static inline unsigned int __attribute__((always_inline))
 rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
 		      unsigned n)
 {
@@ -709,7 +706,7 @@ rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
 static inline int __attribute__((always_inline))
 rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
 {
-	return rte_ring_mp_enqueue_bulk(r, &obj, 1);
+	return rte_ring_mp_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
 }
 
 /**
@@ -726,7 +723,7 @@ rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
 static inline int __attribute__((always_inline))
 rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
 {
-	return rte_ring_sp_enqueue_bulk(r, &obj, 1);
+	return rte_ring_sp_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
 }
 
 /**
@@ -747,10 +744,7 @@ rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
 static inline int __attribute__((always_inline))
 rte_ring_enqueue(struct rte_ring *r, void *obj)
 {
-	if (r->prod.sp_enqueue)
-		return rte_ring_sp_enqueue(r, obj);
-	else
-		return rte_ring_mp_enqueue(r, obj);
+	return rte_ring_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
 }
 
 /**
@@ -766,11 +760,9 @@ rte_ring_enqueue(struct rte_ring *r, void *obj)
  * @param n
  *   The number of objects to dequeue from the ring to the obj_table.
  * @return
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
- *     dequeued.
+ *   The number of objects dequeued, either 0 or n
  */
-static inline int __attribute__((always_inline))
+static inline unsigned int __attribute__((always_inline))
 rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
 {
 	return __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
@@ -787,11 +779,9 @@ rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
  *   The number of objects to dequeue from the ring to the obj_table,
  *   must be strictly positive.
  * @return
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
- *     dequeued.
+ *   The number of objects dequeued, either 0 or n
  */
-static inline int __attribute__((always_inline))
+static inline unsigned int __attribute__((always_inline))
 rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
 {
 	return __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
@@ -811,11 +801,9 @@ rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
  * @param n
  *   The number of objects to dequeue from the ring to the obj_table.
  * @return
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
- *     dequeued.
+ *   The number of objects dequeued, either 0 or n
  */
-static inline int __attribute__((always_inline))
+static inline unsigned int __attribute__((always_inline))
 rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
 {
 	if (r->cons.sc_dequeue)
@@ -842,7 +830,7 @@ rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
 static inline int __attribute__((always_inline))
 rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
 {
-	return rte_ring_mc_dequeue_bulk(r, obj_p, 1);
+	return rte_ring_mc_dequeue_bulk(r, obj_p, 1)  ? 0 : -ENOBUFS;
 }
 
 /**
@@ -860,7 +848,7 @@ rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
 static inline int __attribute__((always_inline))
 rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
 {
-	return rte_ring_sc_dequeue_bulk(r, obj_p, 1);
+	return rte_ring_sc_dequeue_bulk(r, obj_p, 1) ? 0 : -ENOBUFS;
 }
 
 /**
@@ -882,10 +870,7 @@ rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
 static inline int __attribute__((always_inline))
 rte_ring_dequeue(struct rte_ring *r, void **obj_p)
 {
-	if (r->cons.sc_dequeue)
-		return rte_ring_sc_dequeue(r, obj_p);
-	else
-		return rte_ring_mc_dequeue(r, obj_p);
+	return rte_ring_dequeue_bulk(r, obj_p, 1) ? 0 : -ENOBUFS;
 }
 
 /**
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 11/19] ring: allow enq fns to return free space value
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (12 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 10/19] ring: make bulk and burst fn return vals consistent Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 12/19] examples/quota_watermark: use ring space for watermarks Bruce Richardson
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

Add an extra parameter to the ring enqueue burst/bulk functions so that
those functions can optionally return the amount of free space in the
ring. This information can be used by applications in a number of ways,
for instance, with single-producer queues, it provides a max
enqueue size which is guaranteed to work. It can also be used to
implement watermark functionality in apps, replacing the older
functionality with a more flexible version, which enables apps to
implement multiple watermark thresholds, rather than just one.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test-pipeline/pipeline_hash.c                  |  3 +-
 app/test-pipeline/runtime.c                        |  5 +-
 app/test/test_link_bonding_mode4.c                 |  3 +-
 app/test/test_pmd_ring_perf.c                      |  5 +-
 app/test/test_ring.c                               | 55 +++++++------
 app/test/test_ring_perf.c                          | 16 ++--
 app/test/test_table_ports.c                        |  4 +-
 app/test/virtual_pmd.c                             |  4 +-
 drivers/net/ring/rte_eth_ring.c                    |  2 +-
 examples/distributor/main.c                        |  3 +-
 examples/load_balancer/runtime.c                   | 12 ++-
 .../client_server_mp/mp_server/main.c              |  2 +-
 examples/packet_ordering/main.c                    |  7 +-
 examples/qos_sched/app_thread.c                    |  4 +-
 examples/server_node_efd/server/main.c             |  2 +-
 lib/librte_hash/rte_cuckoo_hash.c                  |  2 +-
 lib/librte_mempool/rte_mempool_ring.c              |  4 +-
 lib/librte_pdump/rte_pdump.c                       |  2 +-
 lib/librte_port/rte_port_ras.c                     |  2 +-
 lib/librte_port/rte_port_ring.c                    | 28 ++++---
 lib/librte_ring/rte_ring.h                         | 89 +++++++++++-----------
 21 files changed, 135 insertions(+), 119 deletions(-)

diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 1ac0aa8..0c6e04f 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -546,7 +546,8 @@ app_main_loop_rx_metadata(void) {
 			ret = rte_ring_sp_enqueue_bulk(
 				app.rings_rx[i],
 				(void **) app.mbuf_rx.array,
-				n_mbufs);
+				n_mbufs,
+				NULL);
 		} while (ret == 0);
 	}
 }
diff --git a/app/test-pipeline/runtime.c b/app/test-pipeline/runtime.c
index 4e20669..c06ff54 100644
--- a/app/test-pipeline/runtime.c
+++ b/app/test-pipeline/runtime.c
@@ -97,7 +97,7 @@ app_main_loop_rx(void) {
 			ret = rte_ring_sp_enqueue_bulk(
 				app.rings_rx[i],
 				(void **) app.mbuf_rx.array,
-				n_mbufs);
+				n_mbufs, NULL);
 		} while (ret == 0);
 	}
 }
@@ -130,7 +130,8 @@ app_main_loop_worker(void) {
 			ret = rte_ring_sp_enqueue_bulk(
 				app.rings_tx[i ^ 1],
 				(void **) worker_mbuf->array,
-				app.burst_size_worker_write);
+				app.burst_size_worker_write,
+				NULL);
 		} while (ret == 0);
 	}
 }
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 53caa3e..8df28b4 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -206,7 +206,8 @@ slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
 static int
 slave_put_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
 {
-	return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf, size);
+	return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf,
+			size, NULL);
 }
 
 static uint16_t
diff --git a/app/test/test_pmd_ring_perf.c b/app/test/test_pmd_ring_perf.c
index af011f7..045a7f2 100644
--- a/app/test/test_pmd_ring_perf.c
+++ b/app/test/test_pmd_ring_perf.c
@@ -98,7 +98,7 @@ test_single_enqueue_dequeue(void)
 	const uint64_t sc_start = rte_rdtsc_precise();
 	rte_compiler_barrier();
 	for (i = 0; i < iterations; i++) {
-		rte_ring_enqueue_bulk(r, &burst, 1);
+		rte_ring_enqueue_bulk(r, &burst, 1, NULL);
 		rte_ring_dequeue_bulk(r, &burst, 1);
 	}
 	const uint64_t sc_end = rte_rdtsc_precise();
@@ -131,7 +131,8 @@ test_bulk_enqueue_dequeue(void)
 	for (sz = 0; sz < sizeof(bulk_sizes)/sizeof(bulk_sizes[0]); sz++) {
 		const uint64_t sc_start = rte_rdtsc();
 		for (i = 0; i < iterations; i++) {
-			rte_ring_sp_enqueue_bulk(r, (void *)burst, bulk_sizes[sz]);
+			rte_ring_sp_enqueue_bulk(r, (void *)burst,
+					bulk_sizes[sz], NULL);
 			rte_ring_sc_dequeue_bulk(r, (void *)burst, bulk_sizes[sz]);
 		}
 		const uint64_t sc_end = rte_rdtsc();
diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index 4378fd0..aa2a711 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -118,12 +118,11 @@ test_ring_basic_full_empty(void * const src[], void *dst[])
 		printf("%s: iteration %u, random shift: %u;\n",
 		    __func__, i, rand);
 		TEST_RING_VERIFY(0 != rte_ring_enqueue_bulk(r, src,
-		    rand));
+				rand, NULL));
 		TEST_RING_VERIFY(rand == rte_ring_dequeue_bulk(r, dst, rand));
 
 		/* fill the ring */
-		TEST_RING_VERIFY(0 != rte_ring_enqueue_bulk(r, src,
-		    rsz));
+		TEST_RING_VERIFY(0 != rte_ring_enqueue_bulk(r, src, rsz, NULL));
 		TEST_RING_VERIFY(0 == rte_ring_free_count(r));
 		TEST_RING_VERIFY(rsz == rte_ring_count(r));
 		TEST_RING_VERIFY(rte_ring_full(r));
@@ -169,19 +168,19 @@ test_ring_basic(void)
 	cur_dst = dst;
 
 	printf("enqueue 1 obj\n");
-	ret = rte_ring_sp_enqueue_bulk(r, cur_src, 1);
+	ret = rte_ring_sp_enqueue_bulk(r, cur_src, 1, NULL);
 	cur_src += 1;
 	if (ret == 0)
 		goto fail;
 
 	printf("enqueue 2 objs\n");
-	ret = rte_ring_sp_enqueue_bulk(r, cur_src, 2);
+	ret = rte_ring_sp_enqueue_bulk(r, cur_src, 2, NULL);
 	cur_src += 2;
 	if (ret == 0)
 		goto fail;
 
 	printf("enqueue MAX_BULK objs\n");
-	ret = rte_ring_sp_enqueue_bulk(r, cur_src, MAX_BULK);
+	ret = rte_ring_sp_enqueue_bulk(r, cur_src, MAX_BULK, NULL);
 	cur_src += MAX_BULK;
 	if (ret == 0)
 		goto fail;
@@ -215,19 +214,19 @@ test_ring_basic(void)
 	cur_dst = dst;
 
 	printf("enqueue 1 obj\n");
-	ret = rte_ring_mp_enqueue_bulk(r, cur_src, 1);
+	ret = rte_ring_mp_enqueue_bulk(r, cur_src, 1, NULL);
 	cur_src += 1;
 	if (ret == 0)
 		goto fail;
 
 	printf("enqueue 2 objs\n");
-	ret = rte_ring_mp_enqueue_bulk(r, cur_src, 2);
+	ret = rte_ring_mp_enqueue_bulk(r, cur_src, 2, NULL);
 	cur_src += 2;
 	if (ret == 0)
 		goto fail;
 
 	printf("enqueue MAX_BULK objs\n");
-	ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK);
+	ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK, NULL);
 	cur_src += MAX_BULK;
 	if (ret == 0)
 		goto fail;
@@ -262,7 +261,7 @@ test_ring_basic(void)
 
 	printf("fill and empty the ring\n");
 	for (i = 0; i<RING_SIZE/MAX_BULK; i++) {
-		ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK);
+		ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK, NULL);
 		cur_src += MAX_BULK;
 		if (ret == 0)
 			goto fail;
@@ -292,13 +291,13 @@ test_ring_basic(void)
 	cur_src = src;
 	cur_dst = dst;
 
-	ret = rte_ring_enqueue_bulk(r, cur_src, num_elems);
+	ret = rte_ring_enqueue_bulk(r, cur_src, num_elems, NULL);
 	cur_src += num_elems;
 	if (ret == 0) {
 		printf("Cannot enqueue\n");
 		goto fail;
 	}
-	ret = rte_ring_enqueue_bulk(r, cur_src, num_elems);
+	ret = rte_ring_enqueue_bulk(r, cur_src, num_elems, NULL);
 	cur_src += num_elems;
 	if (ret == 0) {
 		printf("Cannot enqueue\n");
@@ -373,19 +372,19 @@ test_ring_burst_basic(void)
 
 	printf("Test SP & SC basic functions \n");
 	printf("enqueue 1 obj\n");
-	ret = rte_ring_sp_enqueue_burst(r, cur_src, 1);
+	ret = rte_ring_sp_enqueue_burst(r, cur_src, 1, NULL);
 	cur_src += 1;
 	if ((ret & RTE_RING_SZ_MASK) != 1)
 		goto fail;
 
 	printf("enqueue 2 objs\n");
-	ret = rte_ring_sp_enqueue_burst(r, cur_src, 2);
+	ret = rte_ring_sp_enqueue_burst(r, cur_src, 2, NULL);
 	cur_src += 2;
 	if ((ret & RTE_RING_SZ_MASK) != 2)
 		goto fail;
 
 	printf("enqueue MAX_BULK objs\n");
-	ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK) ;
+	ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK, NULL);
 	cur_src += MAX_BULK;
 	if ((ret & RTE_RING_SZ_MASK) != MAX_BULK)
 		goto fail;
@@ -421,7 +420,7 @@ test_ring_burst_basic(void)
 
 	printf("Test enqueue without enough memory space \n");
 	for (i = 0; i< (RING_SIZE/MAX_BULK - 1); i++) {
-		ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK);
+		ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK, NULL);
 		cur_src += MAX_BULK;
 		if ((ret & RTE_RING_SZ_MASK) != MAX_BULK) {
 			goto fail;
@@ -429,14 +428,14 @@ test_ring_burst_basic(void)
 	}
 
 	printf("Enqueue 2 objects, free entries = MAX_BULK - 2  \n");
-	ret = rte_ring_sp_enqueue_burst(r, cur_src, 2);
+	ret = rte_ring_sp_enqueue_burst(r, cur_src, 2, NULL);
 	cur_src += 2;
 	if ((ret & RTE_RING_SZ_MASK) != 2)
 		goto fail;
 
 	printf("Enqueue the remaining entries = MAX_BULK - 2  \n");
 	/* Always one free entry left */
-	ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK);
+	ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK, NULL);
 	cur_src += MAX_BULK - 3;
 	if ((ret & RTE_RING_SZ_MASK) != MAX_BULK - 3)
 		goto fail;
@@ -446,7 +445,7 @@ test_ring_burst_basic(void)
 		goto fail;
 
 	printf("Test enqueue for a full entry  \n");
-	ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK);
+	ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK, NULL);
 	if ((ret & RTE_RING_SZ_MASK) != 0)
 		goto fail;
 
@@ -488,19 +487,19 @@ test_ring_burst_basic(void)
 	printf("Test MP & MC basic functions \n");
 
 	printf("enqueue 1 obj\n");
-	ret = rte_ring_mp_enqueue_burst(r, cur_src, 1);
+	ret = rte_ring_mp_enqueue_burst(r, cur_src, 1, NULL);
 	cur_src += 1;
 	if ((ret & RTE_RING_SZ_MASK) != 1)
 		goto fail;
 
 	printf("enqueue 2 objs\n");
-	ret = rte_ring_mp_enqueue_burst(r, cur_src, 2);
+	ret = rte_ring_mp_enqueue_burst(r, cur_src, 2, NULL);
 	cur_src += 2;
 	if ((ret & RTE_RING_SZ_MASK) != 2)
 		goto fail;
 
 	printf("enqueue MAX_BULK objs\n");
-	ret = rte_ring_mp_enqueue_burst(r, cur_src, MAX_BULK);
+	ret = rte_ring_mp_enqueue_burst(r, cur_src, MAX_BULK, NULL);
 	cur_src += MAX_BULK;
 	if ((ret & RTE_RING_SZ_MASK) != MAX_BULK)
 		goto fail;
@@ -536,7 +535,7 @@ test_ring_burst_basic(void)
 
 	printf("fill and empty the ring\n");
 	for (i = 0; i<RING_SIZE/MAX_BULK; i++) {
-		ret = rte_ring_mp_enqueue_burst(r, cur_src, MAX_BULK);
+		ret = rte_ring_mp_enqueue_burst(r, cur_src, MAX_BULK, NULL);
 		cur_src += MAX_BULK;
 		if ((ret & RTE_RING_SZ_MASK) != MAX_BULK)
 			goto fail;
@@ -559,19 +558,19 @@ test_ring_burst_basic(void)
 
 	printf("Test enqueue without enough memory space \n");
 	for (i = 0; i<RING_SIZE/MAX_BULK - 1; i++) {
-		ret = rte_ring_mp_enqueue_burst(r, cur_src, MAX_BULK);
+		ret = rte_ring_mp_enqueue_burst(r, cur_src, MAX_BULK, NULL);
 		cur_src += MAX_BULK;
 		if ((ret & RTE_RING_SZ_MASK) != MAX_BULK)
 			goto fail;
 	}
 
 	/* Available memory space for the exact MAX_BULK objects */
-	ret = rte_ring_mp_enqueue_burst(r, cur_src, 2);
+	ret = rte_ring_mp_enqueue_burst(r, cur_src, 2, NULL);
 	cur_src += 2;
 	if ((ret & RTE_RING_SZ_MASK) != 2)
 		goto fail;
 
-	ret = rte_ring_mp_enqueue_burst(r, cur_src, MAX_BULK);
+	ret = rte_ring_mp_enqueue_burst(r, cur_src, MAX_BULK, NULL);
 	cur_src += MAX_BULK - 3;
 	if ((ret & RTE_RING_SZ_MASK) != MAX_BULK - 3)
 		goto fail;
@@ -609,7 +608,7 @@ test_ring_burst_basic(void)
 
 	printf("Covering rte_ring_enqueue_burst functions \n");
 
-	ret = rte_ring_enqueue_burst(r, cur_src, 2);
+	ret = rte_ring_enqueue_burst(r, cur_src, 2, NULL);
 	cur_src += 2;
 	if ((ret & RTE_RING_SZ_MASK) != 2)
 		goto fail;
@@ -748,7 +747,7 @@ test_ring_basic_ex(void)
 	}
 
 	/* Covering the ring burst operation */
-	ret = rte_ring_enqueue_burst(rp, obj, 2);
+	ret = rte_ring_enqueue_burst(rp, obj, 2, NULL);
 	if ((ret & RTE_RING_SZ_MASK) != 2) {
 		printf("test_ring_basic_ex: rte_ring_enqueue_burst fails \n");
 		goto fail_test;
diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c
index 8ccbdef..f95a8e9 100644
--- a/app/test/test_ring_perf.c
+++ b/app/test/test_ring_perf.c
@@ -195,13 +195,13 @@ enqueue_bulk(void *p)
 
 	const uint64_t sp_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		while (rte_ring_sp_enqueue_bulk(r, burst, size) == 0)
+		while (rte_ring_sp_enqueue_bulk(r, burst, size, NULL) == 0)
 			rte_pause();
 	const uint64_t sp_end = rte_rdtsc();
 
 	const uint64_t mp_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		while (rte_ring_mp_enqueue_bulk(r, burst, size) == 0)
+		while (rte_ring_mp_enqueue_bulk(r, burst, size, NULL) == 0)
 			rte_pause();
 	const uint64_t mp_end = rte_rdtsc();
 
@@ -323,14 +323,16 @@ test_burst_enqueue_dequeue(void)
 	for (sz = 0; sz < sizeof(bulk_sizes)/sizeof(bulk_sizes[0]); sz++) {
 		const uint64_t sc_start = rte_rdtsc();
 		for (i = 0; i < iterations; i++) {
-			rte_ring_sp_enqueue_burst(r, burst, bulk_sizes[sz]);
+			rte_ring_sp_enqueue_burst(r, burst,
+					bulk_sizes[sz], NULL);
 			rte_ring_sc_dequeue_burst(r, burst, bulk_sizes[sz]);
 		}
 		const uint64_t sc_end = rte_rdtsc();
 
 		const uint64_t mc_start = rte_rdtsc();
 		for (i = 0; i < iterations; i++) {
-			rte_ring_mp_enqueue_burst(r, burst, bulk_sizes[sz]);
+			rte_ring_mp_enqueue_burst(r, burst,
+					bulk_sizes[sz], NULL);
 			rte_ring_mc_dequeue_burst(r, burst, bulk_sizes[sz]);
 		}
 		const uint64_t mc_end = rte_rdtsc();
@@ -357,14 +359,16 @@ test_bulk_enqueue_dequeue(void)
 	for (sz = 0; sz < sizeof(bulk_sizes)/sizeof(bulk_sizes[0]); sz++) {
 		const uint64_t sc_start = rte_rdtsc();
 		for (i = 0; i < iterations; i++) {
-			rte_ring_sp_enqueue_bulk(r, burst, bulk_sizes[sz]);
+			rte_ring_sp_enqueue_bulk(r, burst,
+					bulk_sizes[sz], NULL);
 			rte_ring_sc_dequeue_bulk(r, burst, bulk_sizes[sz]);
 		}
 		const uint64_t sc_end = rte_rdtsc();
 
 		const uint64_t mc_start = rte_rdtsc();
 		for (i = 0; i < iterations; i++) {
-			rte_ring_mp_enqueue_bulk(r, burst, bulk_sizes[sz]);
+			rte_ring_mp_enqueue_bulk(r, burst,
+					bulk_sizes[sz], NULL);
 			rte_ring_mc_dequeue_bulk(r, burst, bulk_sizes[sz]);
 		}
 		const uint64_t mc_end = rte_rdtsc();
diff --git a/app/test/test_table_ports.c b/app/test/test_table_ports.c
index 2532367..395f4f3 100644
--- a/app/test/test_table_ports.c
+++ b/app/test/test_table_ports.c
@@ -80,7 +80,7 @@ test_port_ring_reader(void)
 	mbuf[0] = (void *)rte_pktmbuf_alloc(pool);
 
 	expected_pkts = rte_ring_sp_enqueue_burst(port_ring_reader_params.ring,
-		mbuf, 1);
+		mbuf, 1, NULL);
 	received_pkts = rte_port_ring_reader_ops.f_rx(port, res_mbuf, 1);
 
 	if (received_pkts < expected_pkts)
@@ -93,7 +93,7 @@ test_port_ring_reader(void)
 		mbuf[i] = rte_pktmbuf_alloc(pool);
 
 	expected_pkts = rte_ring_sp_enqueue_burst(port_ring_reader_params.ring,
-		(void * const *) mbuf, RTE_PORT_IN_BURST_SIZE_MAX);
+		(void * const *) mbuf, RTE_PORT_IN_BURST_SIZE_MAX, NULL);
 	received_pkts = rte_port_ring_reader_ops.f_rx(port, res_mbuf,
 		RTE_PORT_IN_BURST_SIZE_MAX);
 
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 6e4dcd8..39e070c 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -380,7 +380,7 @@ virtual_ethdev_tx_burst_success(void *queue, struct rte_mbuf **bufs,
 		nb_pkts = 0;
 	else
 		nb_pkts = rte_ring_enqueue_burst(dev_private->tx_queue, (void **)bufs,
-				nb_pkts);
+				nb_pkts, NULL);
 
 	/* increment opacket count */
 	dev_private->eth_stats.opackets += nb_pkts;
@@ -496,7 +496,7 @@ virtual_ethdev_add_mbufs_to_rx_queue(uint8_t port_id,
 			vrtl_eth_dev->data->dev_private;
 
 	return rte_ring_enqueue_burst(dev_private->rx_queue, (void **)pkt_burst,
-			burst_length);
+			burst_length, NULL);
 }
 
 int
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 6f9cc1a..adbf478 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -102,7 +102,7 @@ eth_ring_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	void **ptrs = (void *)&bufs[0];
 	struct ring_queue *r = q;
 	const uint16_t nb_tx = (uint16_t)rte_ring_enqueue_burst(r->rng,
-			ptrs, nb_bufs);
+			ptrs, nb_bufs, NULL);
 	if (r->rng->flags & RING_F_SP_ENQ) {
 		r->tx_pkts.cnt += nb_tx;
 		r->err_pkts.cnt += nb_bufs - nb_tx;
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index e7641d2..cfd360b 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -238,7 +238,8 @@ lcore_rx(struct lcore_params *p)
 			continue;
 		}
 
-		uint16_t sent = rte_ring_enqueue_burst(r, (void *)bufs, nb_ret);
+		uint16_t sent = rte_ring_enqueue_burst(r, (void *)bufs,
+				nb_ret, NULL);
 		app_stats.rx.enqueued_pkts += sent;
 		if (unlikely(sent < nb_ret)) {
 			RTE_LOG_DP(DEBUG, DISTRAPP,
diff --git a/examples/load_balancer/runtime.c b/examples/load_balancer/runtime.c
index 82b10bc..1645994 100644
--- a/examples/load_balancer/runtime.c
+++ b/examples/load_balancer/runtime.c
@@ -144,7 +144,8 @@ app_lcore_io_rx_buffer_to_send (
 	ret = rte_ring_sp_enqueue_bulk(
 		lp->rx.rings[worker],
 		(void **) lp->rx.mbuf_out[worker].array,
-		bsz);
+		bsz,
+		NULL);
 
 	if (unlikely(ret == 0)) {
 		uint32_t k;
@@ -310,7 +311,8 @@ app_lcore_io_rx_flush(struct app_lcore_params_io *lp, uint32_t n_workers)
 		ret = rte_ring_sp_enqueue_bulk(
 			lp->rx.rings[worker],
 			(void **) lp->rx.mbuf_out[worker].array,
-			lp->rx.mbuf_out[worker].n_mbufs);
+			lp->rx.mbuf_out[worker].n_mbufs,
+			NULL);
 
 		if (unlikely(ret == 0)) {
 			uint32_t k;
@@ -553,7 +555,8 @@ app_lcore_worker(
 			ret = rte_ring_sp_enqueue_bulk(
 				lp->rings_out[port],
 				(void **) lp->mbuf_out[port].array,
-				bsz_wr);
+				bsz_wr,
+				NULL);
 
 #if APP_STATS
 			lp->rings_out_iters[port] ++;
@@ -605,7 +608,8 @@ app_lcore_worker_flush(struct app_lcore_params_worker *lp)
 		ret = rte_ring_sp_enqueue_bulk(
 			lp->rings_out[port],
 			(void **) lp->mbuf_out[port].array,
-			lp->mbuf_out[port].n_mbufs);
+			lp->mbuf_out[port].n_mbufs,
+			NULL);
 
 		if (unlikely(ret == 0)) {
 			uint32_t k;
diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
index 19c95b2..c2b0261 100644
--- a/examples/multi_process/client_server_mp/mp_server/main.c
+++ b/examples/multi_process/client_server_mp/mp_server/main.c
@@ -227,7 +227,7 @@ flush_rx_queue(uint16_t client)
 
 	cl = &clients[client];
 	if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[client].buffer,
-			cl_rx_buf[client].count) == 0){
+			cl_rx_buf[client].count, NULL) == 0){
 		for (j = 0; j < cl_rx_buf[client].count; j++)
 			rte_pktmbuf_free(cl_rx_buf[client].buffer[j]);
 		cl->stats.rx_drop += cl_rx_buf[client].count;
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index d4dc789..d268350 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -421,8 +421,8 @@ rx_thread(struct rte_ring *ring_out)
 					pkts[i++]->seqn = seqn++;
 
 				/* enqueue to rx_to_workers ring */
-				ret = rte_ring_enqueue_burst(ring_out, (void *) pkts,
-								nb_rx_pkts);
+				ret = rte_ring_enqueue_burst(ring_out,
+						(void *)pkts, nb_rx_pkts, NULL);
 				app_stats.rx.enqueue_pkts += ret;
 				if (unlikely(ret < nb_rx_pkts)) {
 					app_stats.rx.enqueue_failed_pkts +=
@@ -473,7 +473,8 @@ worker_thread(void *args_ptr)
 			burst_buffer[i++]->port ^= xor_val;
 
 		/* enqueue the modified mbufs to workers_to_tx ring */
-		ret = rte_ring_enqueue_burst(ring_out, (void *)burst_buffer, burst_size);
+		ret = rte_ring_enqueue_burst(ring_out, (void *)burst_buffer,
+				burst_size, NULL);
 		__sync_fetch_and_add(&app_stats.wkr.enqueue_pkts, ret);
 		if (unlikely(ret < burst_size)) {
 			/* Return the mbufs to their respective pool, dropping packets */
diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
index dab4594..0c81a15 100644
--- a/examples/qos_sched/app_thread.c
+++ b/examples/qos_sched/app_thread.c
@@ -107,7 +107,7 @@ app_rx_thread(struct thread_conf **confs)
 			}
 
 			if (unlikely(rte_ring_sp_enqueue_bulk(conf->rx_ring,
-					(void **)rx_mbufs, nb_rx) == 0)) {
+					(void **)rx_mbufs, nb_rx, NULL) == 0)) {
 				for(i = 0; i < nb_rx; i++) {
 					rte_pktmbuf_free(rx_mbufs[i]);
 
@@ -231,7 +231,7 @@ app_worker_thread(struct thread_conf **confs)
 					burst_conf.qos_dequeue);
 		if (likely(nb_pkt > 0))
 			while (rte_ring_sp_enqueue_bulk(conf->tx_ring,
-					(void **)mbufs, nb_pkt) == 0)
+					(void **)mbufs, nb_pkt, NULL) == 0)
 				; /* empty body */
 
 		conf_idx++;
diff --git a/examples/server_node_efd/server/main.c b/examples/server_node_efd/server/main.c
index 3eb7fac..597b4c2 100644
--- a/examples/server_node_efd/server/main.c
+++ b/examples/server_node_efd/server/main.c
@@ -247,7 +247,7 @@ flush_rx_queue(uint16_t node)
 
 	cl = &nodes[node];
 	if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[node].buffer,
-			cl_rx_buf[node].count) != cl_rx_buf[node].count){
+			cl_rx_buf[node].count, NULL) != cl_rx_buf[node].count){
 		for (j = 0; j < cl_rx_buf[node].count; j++)
 			rte_pktmbuf_free(cl_rx_buf[node].buffer[j]);
 		cl->stats.rx_drop += cl_rx_buf[node].count;
diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index 51db006..6552199 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -808,7 +808,7 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
 			/* Need to enqueue the free slots in global ring. */
 			n_slots = rte_ring_mp_enqueue_burst(h->free_slots,
 						cached_free_slots->objs,
-						LCORE_CACHE_SIZE);
+						LCORE_CACHE_SIZE, NULL);
 			cached_free_slots->len -= n_slots;
 		}
 		/* Put index of new free slot in cache. */
diff --git a/lib/librte_mempool/rte_mempool_ring.c b/lib/librte_mempool/rte_mempool_ring.c
index 409b860..9b8fd2b 100644
--- a/lib/librte_mempool/rte_mempool_ring.c
+++ b/lib/librte_mempool/rte_mempool_ring.c
@@ -43,7 +43,7 @@ common_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
 		unsigned n)
 {
 	return rte_ring_mp_enqueue_bulk(mp->pool_data,
-			obj_table, n) == 0 ? -ENOBUFS : 0;
+			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
 }
 
 static int
@@ -51,7 +51,7 @@ common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
 		unsigned n)
 {
 	return rte_ring_sp_enqueue_bulk(mp->pool_data,
-			obj_table, n) == 0 ? -ENOBUFS : 0;
+			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
 }
 
 static int
diff --git a/lib/librte_pdump/rte_pdump.c b/lib/librte_pdump/rte_pdump.c
index a580a6a..d6d3e46 100644
--- a/lib/librte_pdump/rte_pdump.c
+++ b/lib/librte_pdump/rte_pdump.c
@@ -197,7 +197,7 @@ pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
 			dup_bufs[d_pkts++] = p;
 	}
 
-	ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts);
+	ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
 	if (unlikely(ring_enq < d_pkts)) {
 		RTE_LOG(DEBUG, PDUMP,
 			"only %d of packets enqueued to ring\n", ring_enq);
diff --git a/lib/librte_port/rte_port_ras.c b/lib/librte_port/rte_port_ras.c
index c4bb508..4de0945 100644
--- a/lib/librte_port/rte_port_ras.c
+++ b/lib/librte_port/rte_port_ras.c
@@ -167,7 +167,7 @@ send_burst(struct rte_port_ring_writer_ras *p)
 	uint32_t nb_tx;
 
 	nb_tx = rte_ring_sp_enqueue_burst(p->ring, (void **)p->tx_buf,
-			p->tx_buf_count);
+			p->tx_buf_count, NULL);
 
 	RTE_PORT_RING_WRITER_RAS_STATS_PKTS_DROP_ADD(p, p->tx_buf_count - nb_tx);
 	for ( ; nb_tx < p->tx_buf_count; nb_tx++)
diff --git a/lib/librte_port/rte_port_ring.c b/lib/librte_port/rte_port_ring.c
index 3b9d3d0..9fadac7 100644
--- a/lib/librte_port/rte_port_ring.c
+++ b/lib/librte_port/rte_port_ring.c
@@ -241,7 +241,7 @@ send_burst(struct rte_port_ring_writer *p)
 	uint32_t nb_tx;
 
 	nb_tx = rte_ring_sp_enqueue_burst(p->ring, (void **)p->tx_buf,
-			p->tx_buf_count);
+			p->tx_buf_count, NULL);
 
 	RTE_PORT_RING_WRITER_STATS_PKTS_DROP_ADD(p, p->tx_buf_count - nb_tx);
 	for ( ; nb_tx < p->tx_buf_count; nb_tx++)
@@ -256,7 +256,7 @@ send_burst_mp(struct rte_port_ring_writer *p)
 	uint32_t nb_tx;
 
 	nb_tx = rte_ring_mp_enqueue_burst(p->ring, (void **)p->tx_buf,
-			p->tx_buf_count);
+			p->tx_buf_count, NULL);
 
 	RTE_PORT_RING_WRITER_STATS_PKTS_DROP_ADD(p, p->tx_buf_count - nb_tx);
 	for ( ; nb_tx < p->tx_buf_count; nb_tx++)
@@ -318,11 +318,11 @@ rte_port_ring_writer_tx_bulk_internal(void *port,
 
 		RTE_PORT_RING_WRITER_STATS_PKTS_IN_ADD(p, n_pkts);
 		if (is_multi)
-			n_pkts_ok = rte_ring_mp_enqueue_burst(p->ring, (void **)pkts,
-				n_pkts);
+			n_pkts_ok = rte_ring_mp_enqueue_burst(p->ring,
+					(void **)pkts, n_pkts, NULL);
 		else
-			n_pkts_ok = rte_ring_sp_enqueue_burst(p->ring, (void **)pkts,
-				n_pkts);
+			n_pkts_ok = rte_ring_sp_enqueue_burst(p->ring,
+					(void **)pkts, n_pkts, NULL);
 
 		RTE_PORT_RING_WRITER_STATS_PKTS_DROP_ADD(p, n_pkts - n_pkts_ok);
 		for ( ; n_pkts_ok < n_pkts; n_pkts_ok++) {
@@ -517,7 +517,7 @@ send_burst_nodrop(struct rte_port_ring_writer_nodrop *p)
 	uint32_t nb_tx = 0, i;
 
 	nb_tx = rte_ring_sp_enqueue_burst(p->ring, (void **)p->tx_buf,
-				p->tx_buf_count);
+				p->tx_buf_count, NULL);
 
 	/* We sent all the packets in a first try */
 	if (nb_tx >= p->tx_buf_count) {
@@ -527,7 +527,8 @@ send_burst_nodrop(struct rte_port_ring_writer_nodrop *p)
 
 	for (i = 0; i < p->n_retries; i++) {
 		nb_tx += rte_ring_sp_enqueue_burst(p->ring,
-				(void **) (p->tx_buf + nb_tx), p->tx_buf_count - nb_tx);
+				(void **) (p->tx_buf + nb_tx),
+				p->tx_buf_count - nb_tx, NULL);
 
 		/* We sent all the packets in more than one try */
 		if (nb_tx >= p->tx_buf_count) {
@@ -550,7 +551,7 @@ send_burst_mp_nodrop(struct rte_port_ring_writer_nodrop *p)
 	uint32_t nb_tx = 0, i;
 
 	nb_tx = rte_ring_mp_enqueue_burst(p->ring, (void **)p->tx_buf,
-				p->tx_buf_count);
+				p->tx_buf_count, NULL);
 
 	/* We sent all the packets in a first try */
 	if (nb_tx >= p->tx_buf_count) {
@@ -560,7 +561,8 @@ send_burst_mp_nodrop(struct rte_port_ring_writer_nodrop *p)
 
 	for (i = 0; i < p->n_retries; i++) {
 		nb_tx += rte_ring_mp_enqueue_burst(p->ring,
-				(void **) (p->tx_buf + nb_tx), p->tx_buf_count - nb_tx);
+				(void **) (p->tx_buf + nb_tx),
+				p->tx_buf_count - nb_tx, NULL);
 
 		/* We sent all the packets in more than one try */
 		if (nb_tx >= p->tx_buf_count) {
@@ -633,10 +635,12 @@ rte_port_ring_writer_nodrop_tx_bulk_internal(void *port,
 		RTE_PORT_RING_WRITER_NODROP_STATS_PKTS_IN_ADD(p, n_pkts);
 		if (is_multi)
 			n_pkts_ok =
-				rte_ring_mp_enqueue_burst(p->ring, (void **)pkts, n_pkts);
+				rte_ring_mp_enqueue_burst(p->ring,
+						(void **)pkts, n_pkts, NULL);
 		else
 			n_pkts_ok =
-				rte_ring_sp_enqueue_burst(p->ring, (void **)pkts, n_pkts);
+				rte_ring_sp_enqueue_burst(p->ring,
+						(void **)pkts, n_pkts, NULL);
 
 		if (n_pkts_ok >= n_pkts)
 			return 0;
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index d4d44ce..2f8995c 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -354,20 +354,16 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
  */
 static inline unsigned int __attribute__((always_inline))
 __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
-			 unsigned n, enum rte_ring_queue_behavior behavior)
+			 unsigned int n, enum rte_ring_queue_behavior behavior,
+			 unsigned int *free_space)
 {
 	uint32_t prod_head, prod_next;
 	uint32_t cons_tail, free_entries;
-	const unsigned max = n;
+	const unsigned int max = n;
 	int success;
 	unsigned int i;
 	uint32_t mask = r->mask;
 
-	/* Avoid the unnecessary cmpset operation below, which is also
-	 * potentially harmful when n equals 0. */
-	if (n == 0)
-		return 0;
-
 	/* move prod.head atomically */
 	do {
 		/* Reset n to the initial burst count */
@@ -382,16 +378,12 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 		free_entries = (mask + cons_tail - prod_head);
 
 		/* check that we have enough room in ring */
-		if (unlikely(n > free_entries)) {
-			if (behavior == RTE_RING_QUEUE_FIXED)
-				return 0;
-			else {
-				/* No free entry available */
-				if (unlikely(free_entries == 0))
-					return 0;
-				n = free_entries;
-			}
-		}
+		if (unlikely(n > free_entries))
+			n = (behavior == RTE_RING_QUEUE_FIXED) ?
+					0 : free_entries;
+
+		if (n == 0)
+			goto end;
 
 		prod_next = prod_head + n;
 		success = rte_atomic32_cmpset(&r->prod.head, prod_head,
@@ -410,6 +402,9 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 		rte_pause();
 
 	r->prod.tail = prod_next;
+end:
+	if (free_space != NULL)
+		*free_space = free_entries - n;
 	return n;
 }
 
@@ -435,7 +430,8 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
  */
 static inline unsigned int __attribute__((always_inline))
 __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
-			 unsigned n, enum rte_ring_queue_behavior behavior)
+			 unsigned int n, enum rte_ring_queue_behavior behavior,
+			 unsigned int *free_space)
 {
 	uint32_t prod_head, cons_tail;
 	uint32_t prod_next, free_entries;
@@ -451,16 +447,12 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	free_entries = mask + cons_tail - prod_head;
 
 	/* check that we have enough room in ring */
-	if (unlikely(n > free_entries)) {
-		if (behavior == RTE_RING_QUEUE_FIXED)
-			return 0;
-		else {
-			/* No free entry available */
-			if (unlikely(free_entries == 0))
-				return 0;
-			n = free_entries;
-		}
-	}
+	if (unlikely(n > free_entries))
+		n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : free_entries;
+
+	if (n == 0)
+		goto end;
+
 
 	prod_next = prod_head + n;
 	r->prod.head = prod_next;
@@ -470,6 +462,9 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	rte_smp_wmb();
 
 	r->prod.tail = prod_next;
+end:
+	if (free_space != NULL)
+		*free_space = free_entries - n;
 	return n;
 }
 
@@ -639,9 +634,10 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
  */
 static inline unsigned int __attribute__((always_inline))
 rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
-			 unsigned n)
+			 unsigned int n, unsigned int *free_space)
 {
-	return __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
+	return __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
+			free_space);
 }
 
 /**
@@ -658,9 +654,10 @@ rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  */
 static inline unsigned int __attribute__((always_inline))
 rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
-			 unsigned n)
+			 unsigned int n, unsigned int *free_space)
 {
-	return __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
+	return __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
+			free_space);
 }
 
 /**
@@ -681,12 +678,12 @@ rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  */
 static inline unsigned int __attribute__((always_inline))
 rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
-		      unsigned n)
+		      unsigned int n, unsigned int *free_space)
 {
 	if (r->prod.sp_enqueue)
-		return rte_ring_sp_enqueue_bulk(r, obj_table, n);
+		return rte_ring_sp_enqueue_bulk(r, obj_table, n, free_space);
 	else
-		return rte_ring_mp_enqueue_bulk(r, obj_table, n);
+		return rte_ring_mp_enqueue_bulk(r, obj_table, n, free_space);
 }
 
 /**
@@ -706,7 +703,7 @@ rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
 static inline int __attribute__((always_inline))
 rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
 {
-	return rte_ring_mp_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
+	return rte_ring_mp_enqueue_bulk(r, &obj, 1, NULL) ? 0 : -ENOBUFS;
 }
 
 /**
@@ -723,7 +720,7 @@ rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
 static inline int __attribute__((always_inline))
 rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
 {
-	return rte_ring_sp_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
+	return rte_ring_sp_enqueue_bulk(r, &obj, 1, NULL) ? 0 : -ENOBUFS;
 }
 
 /**
@@ -744,7 +741,7 @@ rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
 static inline int __attribute__((always_inline))
 rte_ring_enqueue(struct rte_ring *r, void *obj)
 {
-	return rte_ring_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
+	return rte_ring_enqueue_bulk(r, &obj, 1, NULL) ? 0 : -ENOBUFS;
 }
 
 /**
@@ -990,9 +987,10 @@ struct rte_ring *rte_ring_lookup(const char *name);
  */
 static inline unsigned __attribute__((always_inline))
 rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
-			 unsigned n)
+			 unsigned int n, unsigned int *free_space)
 {
-	return __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
+	return __rte_ring_mp_do_enqueue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, free_space);
 }
 
 /**
@@ -1009,9 +1007,10 @@ rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
  */
 static inline unsigned __attribute__((always_inline))
 rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
-			 unsigned n)
+			 unsigned int n, unsigned int *free_space)
 {
-	return __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
+	return __rte_ring_sp_do_enqueue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, free_space);
 }
 
 /**
@@ -1032,12 +1031,12 @@ rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
  */
 static inline unsigned __attribute__((always_inline))
 rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,
-		      unsigned n)
+		      unsigned int n, unsigned int *free_space)
 {
 	if (r->prod.sp_enqueue)
-		return rte_ring_sp_enqueue_burst(r, obj_table, n);
+		return rte_ring_sp_enqueue_burst(r, obj_table, n, free_space);
 	else
-		return rte_ring_mp_enqueue_burst(r, obj_table, n);
+		return rte_ring_mp_enqueue_burst(r, obj_table, n, free_space);
 }
 
 /**
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 12/19] examples/quota_watermark: use ring space for watermarks
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (13 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 11/19] ring: allow enq fns to return free space value Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 13/19] ring: allow dequeue fns to return remaining entry count Bruce Richardson
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

Now that the enqueue function returns the amount of space in the ring,
we can use that to replace the old watermark functionality. Update the
example app to do so, and re-enable it in the examples Makefile.

NOTE: RFC, THIS IS NOT YET TESTED

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 examples/Makefile                         |  2 +-
 examples/quota_watermark/qw/init.c        |  5 +++--
 examples/quota_watermark/qw/main.c        | 10 ++++++----
 examples/quota_watermark/qw/main.h        |  1 +
 examples/quota_watermark/qwctl/commands.c |  2 +-
 examples/quota_watermark/qwctl/qwctl.c    |  2 ++
 examples/quota_watermark/qwctl/qwctl.h    |  1 +
 7 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/examples/Makefile b/examples/Makefile
index 19cd5ad..da2bfdd 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -81,7 +81,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_REORDER) += packet_ordering
 DIRS-$(CONFIG_RTE_LIBRTE_IEEE1588) += ptpclient
 DIRS-$(CONFIG_RTE_LIBRTE_METER) += qos_meter
 DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += qos_sched
-#DIRS-y += quota_watermark
+DIRS-y += quota_watermark
 DIRS-$(CONFIG_RTE_ETHDEV_RXTX_CALLBACKS) += rxtx_callbacks
 DIRS-y += skeleton
 ifeq ($(CONFIG_RTE_LIBRTE_HASH),y)
diff --git a/examples/quota_watermark/qw/init.c b/examples/quota_watermark/qw/init.c
index c208721..1c8f302 100644
--- a/examples/quota_watermark/qw/init.c
+++ b/examples/quota_watermark/qw/init.c
@@ -137,7 +137,7 @@ void init_ring(int lcore_id, uint8_t port_id)
     if (ring == NULL)
         rte_exit(EXIT_FAILURE, "%s\n", rte_strerror(rte_errno));
 
-    rte_ring_set_water_mark(ring, 80 * RING_SIZE / 100);
+    *high_watermark = 80 * RING_SIZE / 100;
 
     rings[lcore_id][port_id] = ring;
 }
@@ -164,11 +164,12 @@ setup_shared_variables(void)
 {
     const struct rte_memzone *qw_memzone;
 
-    qw_memzone = rte_memzone_reserve(QUOTA_WATERMARK_MEMZONE_NAME, 2 * sizeof(int),
+    qw_memzone = rte_memzone_reserve(QUOTA_WATERMARK_MEMZONE_NAME, 3 * sizeof(int),
                                      rte_socket_id(), RTE_MEMZONE_2MB);
     if (qw_memzone == NULL)
         rte_exit(EXIT_FAILURE, "%s\n", rte_strerror(rte_errno));
 
     quota = qw_memzone->addr;
     low_watermark = (unsigned int *) qw_memzone->addr + 1;
+    high_watermark = (unsigned int *) qw_memzone->addr + 2;
 }
diff --git a/examples/quota_watermark/qw/main.c b/examples/quota_watermark/qw/main.c
index 9162e28..8fb7eb1 100644
--- a/examples/quota_watermark/qw/main.c
+++ b/examples/quota_watermark/qw/main.c
@@ -67,6 +67,7 @@ struct ether_fc_frame {
 
 int *quota;
 unsigned int *low_watermark;
+unsigned int *high_watermark;
 
 uint8_t port_pairs[RTE_MAX_ETHPORTS];
 
@@ -157,6 +158,7 @@ receive_stage(__attribute__((unused)) void *args)
     uint16_t nb_rx_pkts;
 
     unsigned int lcore_id;
+    unsigned int free;
 
     struct rte_mbuf *pkts[MAX_PKT_QUOTA];
     struct rte_ring *ring;
@@ -186,13 +188,13 @@ receive_stage(__attribute__((unused)) void *args)
 
             /* Enqueue received packets on the RX ring */
             nb_rx_pkts = rte_eth_rx_burst(port_id, 0, pkts, (uint16_t) *quota);
-            ret = rte_ring_enqueue_bulk(ring, (void *) pkts, nb_rx_pkts);
-            if (ret == -EDQUOT) {
+            ret = rte_ring_enqueue_bulk(ring, (void *) pkts, nb_rx_pkts, &free);
+            if (RING_SIZE - free > *high_watermark) {
                 ring_state[port_id] = RING_OVERLOADED;
                 send_pause_frame(port_id, 1337);
             }
 
-            else if (ret == -ENOBUFS) {
+            else if (ret == 0) {
 
                 /* Return  mbufs to the pool, effectively dropping packets */
                 for (i = 0; i < nb_rx_pkts; i++)
@@ -246,7 +248,7 @@ pipeline_stage(__attribute__((unused)) void *args)
                 continue;
 
             /* Enqueue them on tx */
-            ret = rte_ring_enqueue_bulk(tx, pkts, nb_dq_pkts);
+            ret = rte_ring_enqueue_bulk(tx, pkts, nb_dq_pkts, NULL);
             if (ret == -EDQUOT)
                 ring_state[port_id] = RING_OVERLOADED;
 
diff --git a/examples/quota_watermark/qw/main.h b/examples/quota_watermark/qw/main.h
index 6b36489..d17fe95 100644
--- a/examples/quota_watermark/qw/main.h
+++ b/examples/quota_watermark/qw/main.h
@@ -43,6 +43,7 @@ enum ring_state {
 
 extern int *quota;
 extern unsigned int *low_watermark;
+extern unsigned int *high_watermark;
 
 extern uint8_t port_pairs[RTE_MAX_ETHPORTS];
 
diff --git a/examples/quota_watermark/qwctl/commands.c b/examples/quota_watermark/qwctl/commands.c
index 5348dd3..8de6a9a 100644
--- a/examples/quota_watermark/qwctl/commands.c
+++ b/examples/quota_watermark/qwctl/commands.c
@@ -137,7 +137,7 @@ cmd_set_handler(__attribute__((unused)) void *parsed_result,
         else
             if (tokens->value >= *low_watermark * 100 / RING_SIZE
              && tokens->value <= 100)
-                rte_ring_set_water_mark(ring, tokens->value * RING_SIZE / 100);
+                *high_watermark = tokens->value * RING_SIZE / 100;
             else
                 cmdline_printf(cl, "ring high watermark must be between %u%% "
                                    "and 100%%\n", *low_watermark * 100 / RING_SIZE);
diff --git a/examples/quota_watermark/qwctl/qwctl.c b/examples/quota_watermark/qwctl/qwctl.c
index 29c501c..107101f 100644
--- a/examples/quota_watermark/qwctl/qwctl.c
+++ b/examples/quota_watermark/qwctl/qwctl.c
@@ -55,6 +55,7 @@
 
 int *quota;
 unsigned int *low_watermark;
+unsigned int *high_watermark;
 
 
 static void
@@ -68,6 +69,7 @@ setup_shared_variables(void)
 
     quota = qw_memzone->addr;
     low_watermark = (unsigned int *) qw_memzone->addr + 1;
+    high_watermark = (unsigned int *) qw_memzone->addr + 2;
 }
 
 int main(int argc, char **argv)
diff --git a/examples/quota_watermark/qwctl/qwctl.h b/examples/quota_watermark/qwctl/qwctl.h
index 8d146e5..545914b 100644
--- a/examples/quota_watermark/qwctl/qwctl.h
+++ b/examples/quota_watermark/qwctl/qwctl.h
@@ -36,5 +36,6 @@
 
 extern int *quota;
 extern unsigned int *low_watermark;
+extern unsigned int *high_watermark;
 
 #endif /* _MAIN_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 13/19] ring: allow dequeue fns to return remaining entry count
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (14 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 12/19] examples/quota_watermark: use ring space for watermarks Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 14/19] ring: reduce scope of local variables Bruce Richardson
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

Add an extra parameter to the ring dequeue burst/bulk functions so that
those functions can optionally return the amount of remaining objs in the
ring. This information can be used by applications in a number of ways,
for instance, with single-consumer queues, it provides a max
dequeue size which is guaranteed to work.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/pdump/main.c                                   |  2 +-
 app/test-pipeline/runtime.c                        |  6 +-
 app/test/test_link_bonding_mode4.c                 |  3 +-
 app/test/test_pmd_ring_perf.c                      |  7 +-
 app/test/test_ring.c                               | 54 ++++++-------
 app/test/test_ring_perf.c                          | 20 +++--
 app/test/test_table_acl.c                          |  2 +-
 app/test/test_table_pipeline.c                     |  2 +-
 app/test/test_table_ports.c                        |  8 +-
 app/test/virtual_pmd.c                             |  4 +-
 drivers/crypto/null/null_crypto_pmd.c              |  2 +-
 drivers/net/bonding/rte_eth_bond_pmd.c             |  3 +-
 drivers/net/ring/rte_eth_ring.c                    |  2 +-
 examples/distributor/main.c                        |  2 +-
 examples/load_balancer/runtime.c                   |  6 +-
 .../client_server_mp/mp_client/client.c            |  3 +-
 examples/packet_ordering/main.c                    |  6 +-
 examples/qos_sched/app_thread.c                    |  6 +-
 examples/quota_watermark/qw/main.c                 |  5 +-
 examples/server_node_efd/node/node.c               |  2 +-
 lib/librte_hash/rte_cuckoo_hash.c                  |  3 +-
 lib/librte_mempool/rte_mempool_ring.c              |  4 +-
 lib/librte_port/rte_port_frag.c                    |  3 +-
 lib/librte_port/rte_port_ring.c                    |  6 +-
 lib/librte_ring/rte_ring.h                         | 90 +++++++++++-----------
 25 files changed, 137 insertions(+), 114 deletions(-)

diff --git a/app/pdump/main.c b/app/pdump/main.c
index b88090d..3b13753 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -496,7 +496,7 @@ pdump_rxtx(struct rte_ring *ring, uint8_t vdev_id, struct pdump_stats *stats)
 
 	/* first dequeue packets from ring of primary process */
 	const uint16_t nb_in_deq = rte_ring_dequeue_burst(ring,
-			(void *)rxtx_bufs, BURST_SIZE);
+			(void *)rxtx_bufs, BURST_SIZE, NULL);
 	stats->dequeue_pkts += nb_in_deq;
 
 	if (nb_in_deq) {
diff --git a/app/test-pipeline/runtime.c b/app/test-pipeline/runtime.c
index c06ff54..8970e1c 100644
--- a/app/test-pipeline/runtime.c
+++ b/app/test-pipeline/runtime.c
@@ -121,7 +121,8 @@ app_main_loop_worker(void) {
 		ret = rte_ring_sc_dequeue_bulk(
 			app.rings_rx[i],
 			(void **) worker_mbuf->array,
-			app.burst_size_worker_read);
+			app.burst_size_worker_read,
+			NULL);
 
 		if (ret == 0)
 			continue;
@@ -151,7 +152,8 @@ app_main_loop_tx(void) {
 		ret = rte_ring_sc_dequeue_bulk(
 			app.rings_tx[i],
 			(void **) &app.mbuf_tx[i].array[n_mbufs],
-			app.burst_size_tx_read);
+			app.burst_size_tx_read,
+			NULL);
 
 		if (ret == 0)
 			continue;
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 8df28b4..15091b1 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -193,7 +193,8 @@ static uint8_t lacpdu_rx_count[RTE_MAX_ETHPORTS] = {0, };
 static int
 slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
 {
-	return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf, size);
+	return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf,
+			size, NULL);
 }
 
 /*
diff --git a/app/test/test_pmd_ring_perf.c b/app/test/test_pmd_ring_perf.c
index 045a7f2..004882a 100644
--- a/app/test/test_pmd_ring_perf.c
+++ b/app/test/test_pmd_ring_perf.c
@@ -67,7 +67,7 @@ test_empty_dequeue(void)
 
 	const uint64_t sc_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		rte_ring_sc_dequeue_bulk(r, burst, bulk_sizes[0]);
+		rte_ring_sc_dequeue_bulk(r, burst, bulk_sizes[0], NULL);
 	const uint64_t sc_end = rte_rdtsc();
 
 	const uint64_t eth_start = rte_rdtsc();
@@ -99,7 +99,7 @@ test_single_enqueue_dequeue(void)
 	rte_compiler_barrier();
 	for (i = 0; i < iterations; i++) {
 		rte_ring_enqueue_bulk(r, &burst, 1, NULL);
-		rte_ring_dequeue_bulk(r, &burst, 1);
+		rte_ring_dequeue_bulk(r, &burst, 1, NULL);
 	}
 	const uint64_t sc_end = rte_rdtsc_precise();
 	rte_compiler_barrier();
@@ -133,7 +133,8 @@ test_bulk_enqueue_dequeue(void)
 		for (i = 0; i < iterations; i++) {
 			rte_ring_sp_enqueue_bulk(r, (void *)burst,
 					bulk_sizes[sz], NULL);
-			rte_ring_sc_dequeue_bulk(r, (void *)burst, bulk_sizes[sz]);
+			rte_ring_sc_dequeue_bulk(r, (void *)burst,
+					bulk_sizes[sz], NULL);
 		}
 		const uint64_t sc_end = rte_rdtsc();
 
diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index aa2a711..5b61ef1 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -119,7 +119,8 @@ test_ring_basic_full_empty(void * const src[], void *dst[])
 		    __func__, i, rand);
 		TEST_RING_VERIFY(0 != rte_ring_enqueue_bulk(r, src,
 				rand, NULL));
-		TEST_RING_VERIFY(rand == rte_ring_dequeue_bulk(r, dst, rand));
+		TEST_RING_VERIFY(rand == rte_ring_dequeue_bulk(r, dst,
+				rand, NULL));
 
 		/* fill the ring */
 		TEST_RING_VERIFY(0 != rte_ring_enqueue_bulk(r, src, rsz, NULL));
@@ -129,7 +130,8 @@ test_ring_basic_full_empty(void * const src[], void *dst[])
 		TEST_RING_VERIFY(0 == rte_ring_empty(r));
 
 		/* empty the ring */
-		TEST_RING_VERIFY(rsz == rte_ring_dequeue_bulk(r, dst, rsz));
+		TEST_RING_VERIFY(rsz == rte_ring_dequeue_bulk(r, dst,
+				rsz, NULL));
 		TEST_RING_VERIFY(rsz == rte_ring_free_count(r));
 		TEST_RING_VERIFY(0 == rte_ring_count(r));
 		TEST_RING_VERIFY(0 == rte_ring_full(r));
@@ -186,19 +188,19 @@ test_ring_basic(void)
 		goto fail;
 
 	printf("dequeue 1 obj\n");
-	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 1);
+	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 1, NULL);
 	cur_dst += 1;
 	if (ret == 0)
 		goto fail;
 
 	printf("dequeue 2 objs\n");
-	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 2);
+	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 2, NULL);
 	cur_dst += 2;
 	if (ret == 0)
 		goto fail;
 
 	printf("dequeue MAX_BULK objs\n");
-	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, MAX_BULK);
+	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, MAX_BULK, NULL);
 	cur_dst += MAX_BULK;
 	if (ret == 0)
 		goto fail;
@@ -232,19 +234,19 @@ test_ring_basic(void)
 		goto fail;
 
 	printf("dequeue 1 obj\n");
-	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 1);
+	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 1, NULL);
 	cur_dst += 1;
 	if (ret == 0)
 		goto fail;
 
 	printf("dequeue 2 objs\n");
-	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 2);
+	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 2, NULL);
 	cur_dst += 2;
 	if (ret == 0)
 		goto fail;
 
 	printf("dequeue MAX_BULK objs\n");
-	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK);
+	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK, NULL);
 	cur_dst += MAX_BULK;
 	if (ret == 0)
 		goto fail;
@@ -265,7 +267,7 @@ test_ring_basic(void)
 		cur_src += MAX_BULK;
 		if (ret == 0)
 			goto fail;
-		ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK);
+		ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK, NULL);
 		cur_dst += MAX_BULK;
 		if (ret == 0)
 			goto fail;
@@ -303,13 +305,13 @@ test_ring_basic(void)
 		printf("Cannot enqueue\n");
 		goto fail;
 	}
-	ret = rte_ring_dequeue_bulk(r, cur_dst, num_elems);
+	ret = rte_ring_dequeue_bulk(r, cur_dst, num_elems, NULL);
 	cur_dst += num_elems;
 	if (ret == 0) {
 		printf("Cannot dequeue\n");
 		goto fail;
 	}
-	ret = rte_ring_dequeue_bulk(r, cur_dst, num_elems);
+	ret = rte_ring_dequeue_bulk(r, cur_dst, num_elems, NULL);
 	cur_dst += num_elems;
 	if (ret == 0) {
 		printf("Cannot dequeue2\n");
@@ -390,19 +392,19 @@ test_ring_burst_basic(void)
 		goto fail;
 
 	printf("dequeue 1 obj\n");
-	ret = rte_ring_sc_dequeue_burst(r, cur_dst, 1) ;
+	ret = rte_ring_sc_dequeue_burst(r, cur_dst, 1, NULL) ;
 	cur_dst += 1;
 	if ((ret & RTE_RING_SZ_MASK) != 1)
 		goto fail;
 
 	printf("dequeue 2 objs\n");
-	ret = rte_ring_sc_dequeue_burst(r, cur_dst, 2);
+	ret = rte_ring_sc_dequeue_burst(r, cur_dst, 2, NULL);
 	cur_dst += 2;
 	if ((ret & RTE_RING_SZ_MASK) != 2)
 		goto fail;
 
 	printf("dequeue MAX_BULK objs\n");
-	ret = rte_ring_sc_dequeue_burst(r, cur_dst, MAX_BULK);
+	ret = rte_ring_sc_dequeue_burst(r, cur_dst, MAX_BULK, NULL);
 	cur_dst += MAX_BULK;
 	if ((ret & RTE_RING_SZ_MASK) != MAX_BULK)
 		goto fail;
@@ -451,19 +453,19 @@ test_ring_burst_basic(void)
 
 	printf("Test dequeue without enough objects \n");
 	for (i = 0; i<RING_SIZE/MAX_BULK - 1; i++) {
-		ret = rte_ring_sc_dequeue_burst(r, cur_dst, MAX_BULK);
+		ret = rte_ring_sc_dequeue_burst(r, cur_dst, MAX_BULK, NULL);
 		cur_dst += MAX_BULK;
 		if ((ret & RTE_RING_SZ_MASK) != MAX_BULK)
 			goto fail;
 	}
 
 	/* Available memory space for the exact MAX_BULK entries */
-	ret = rte_ring_sc_dequeue_burst(r, cur_dst, 2);
+	ret = rte_ring_sc_dequeue_burst(r, cur_dst, 2, NULL);
 	cur_dst += 2;
 	if ((ret & RTE_RING_SZ_MASK) != 2)
 		goto fail;
 
-	ret = rte_ring_sc_dequeue_burst(r, cur_dst, MAX_BULK);
+	ret = rte_ring_sc_dequeue_burst(r, cur_dst, MAX_BULK, NULL);
 	cur_dst += MAX_BULK - 3;
 	if ((ret & RTE_RING_SZ_MASK) != MAX_BULK - 3)
 		goto fail;
@@ -505,19 +507,19 @@ test_ring_burst_basic(void)
 		goto fail;
 
 	printf("dequeue 1 obj\n");
-	ret = rte_ring_mc_dequeue_burst(r, cur_dst, 1);
+	ret = rte_ring_mc_dequeue_burst(r, cur_dst, 1, NULL);
 	cur_dst += 1;
 	if ((ret & RTE_RING_SZ_MASK) != 1)
 		goto fail;
 
 	printf("dequeue 2 objs\n");
-	ret = rte_ring_mc_dequeue_burst(r, cur_dst, 2);
+	ret = rte_ring_mc_dequeue_burst(r, cur_dst, 2, NULL);
 	cur_dst += 2;
 	if ((ret & RTE_RING_SZ_MASK) != 2)
 		goto fail;
 
 	printf("dequeue MAX_BULK objs\n");
-	ret = rte_ring_mc_dequeue_burst(r, cur_dst, MAX_BULK);
+	ret = rte_ring_mc_dequeue_burst(r, cur_dst, MAX_BULK, NULL);
 	cur_dst += MAX_BULK;
 	if ((ret & RTE_RING_SZ_MASK) != MAX_BULK)
 		goto fail;
@@ -539,7 +541,7 @@ test_ring_burst_basic(void)
 		cur_src += MAX_BULK;
 		if ((ret & RTE_RING_SZ_MASK) != MAX_BULK)
 			goto fail;
-		ret = rte_ring_mc_dequeue_burst(r, cur_dst, MAX_BULK);
+		ret = rte_ring_mc_dequeue_burst(r, cur_dst, MAX_BULK, NULL);
 		cur_dst += MAX_BULK;
 		if ((ret & RTE_RING_SZ_MASK) != MAX_BULK)
 			goto fail;
@@ -578,19 +580,19 @@ test_ring_burst_basic(void)
 
 	printf("Test dequeue without enough objects \n");
 	for (i = 0; i<RING_SIZE/MAX_BULK - 1; i++) {
-		ret = rte_ring_mc_dequeue_burst(r, cur_dst, MAX_BULK);
+		ret = rte_ring_mc_dequeue_burst(r, cur_dst, MAX_BULK, NULL);
 		cur_dst += MAX_BULK;
 		if ((ret & RTE_RING_SZ_MASK) != MAX_BULK)
 			goto fail;
 	}
 
 	/* Available objects - the exact MAX_BULK */
-	ret = rte_ring_mc_dequeue_burst(r, cur_dst, 2);
+	ret = rte_ring_mc_dequeue_burst(r, cur_dst, 2, NULL);
 	cur_dst += 2;
 	if ((ret & RTE_RING_SZ_MASK) != 2)
 		goto fail;
 
-	ret = rte_ring_mc_dequeue_burst(r, cur_dst, MAX_BULK);
+	ret = rte_ring_mc_dequeue_burst(r, cur_dst, MAX_BULK, NULL);
 	cur_dst += MAX_BULK - 3;
 	if ((ret & RTE_RING_SZ_MASK) != MAX_BULK - 3)
 		goto fail;
@@ -613,7 +615,7 @@ test_ring_burst_basic(void)
 	if ((ret & RTE_RING_SZ_MASK) != 2)
 		goto fail;
 
-	ret = rte_ring_dequeue_burst(r, cur_dst, 2);
+	ret = rte_ring_dequeue_burst(r, cur_dst, 2, NULL);
 	cur_dst += 2;
 	if (ret != 2)
 		goto fail;
@@ -753,7 +755,7 @@ test_ring_basic_ex(void)
 		goto fail_test;
 	}
 
-	ret = rte_ring_dequeue_burst(rp, obj, 2);
+	ret = rte_ring_dequeue_burst(rp, obj, 2, NULL);
 	if (ret != 2) {
 		printf("test_ring_basic_ex: rte_ring_dequeue_burst fails \n");
 		goto fail_test;
diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c
index f95a8e9..ed89896 100644
--- a/app/test/test_ring_perf.c
+++ b/app/test/test_ring_perf.c
@@ -152,12 +152,12 @@ test_empty_dequeue(void)
 
 	const uint64_t sc_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		rte_ring_sc_dequeue_bulk(r, burst, bulk_sizes[0]);
+		rte_ring_sc_dequeue_bulk(r, burst, bulk_sizes[0], NULL);
 	const uint64_t sc_end = rte_rdtsc();
 
 	const uint64_t mc_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		rte_ring_mc_dequeue_bulk(r, burst, bulk_sizes[0]);
+		rte_ring_mc_dequeue_bulk(r, burst, bulk_sizes[0], NULL);
 	const uint64_t mc_end = rte_rdtsc();
 
 	printf("SC empty dequeue: %.2F\n",
@@ -230,13 +230,13 @@ dequeue_bulk(void *p)
 
 	const uint64_t sc_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		while (rte_ring_sc_dequeue_bulk(r, burst, size) == 0)
+		while (rte_ring_sc_dequeue_bulk(r, burst, size, NULL) == 0)
 			rte_pause();
 	const uint64_t sc_end = rte_rdtsc();
 
 	const uint64_t mc_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		while (rte_ring_mc_dequeue_bulk(r, burst, size) == 0)
+		while (rte_ring_mc_dequeue_bulk(r, burst, size, NULL) == 0)
 			rte_pause();
 	const uint64_t mc_end = rte_rdtsc();
 
@@ -325,7 +325,8 @@ test_burst_enqueue_dequeue(void)
 		for (i = 0; i < iterations; i++) {
 			rte_ring_sp_enqueue_burst(r, burst,
 					bulk_sizes[sz], NULL);
-			rte_ring_sc_dequeue_burst(r, burst, bulk_sizes[sz]);
+			rte_ring_sc_dequeue_burst(r, burst,
+					bulk_sizes[sz], NULL);
 		}
 		const uint64_t sc_end = rte_rdtsc();
 
@@ -333,7 +334,8 @@ test_burst_enqueue_dequeue(void)
 		for (i = 0; i < iterations; i++) {
 			rte_ring_mp_enqueue_burst(r, burst,
 					bulk_sizes[sz], NULL);
-			rte_ring_mc_dequeue_burst(r, burst, bulk_sizes[sz]);
+			rte_ring_mc_dequeue_burst(r, burst,
+					bulk_sizes[sz], NULL);
 		}
 		const uint64_t mc_end = rte_rdtsc();
 
@@ -361,7 +363,8 @@ test_bulk_enqueue_dequeue(void)
 		for (i = 0; i < iterations; i++) {
 			rte_ring_sp_enqueue_bulk(r, burst,
 					bulk_sizes[sz], NULL);
-			rte_ring_sc_dequeue_bulk(r, burst, bulk_sizes[sz]);
+			rte_ring_sc_dequeue_bulk(r, burst,
+					bulk_sizes[sz], NULL);
 		}
 		const uint64_t sc_end = rte_rdtsc();
 
@@ -369,7 +372,8 @@ test_bulk_enqueue_dequeue(void)
 		for (i = 0; i < iterations; i++) {
 			rte_ring_mp_enqueue_bulk(r, burst,
 					bulk_sizes[sz], NULL);
-			rte_ring_mc_dequeue_bulk(r, burst, bulk_sizes[sz]);
+			rte_ring_mc_dequeue_bulk(r, burst,
+					bulk_sizes[sz], NULL);
 		}
 		const uint64_t mc_end = rte_rdtsc();
 
diff --git a/app/test/test_table_acl.c b/app/test/test_table_acl.c
index b3bfda4..4d43be7 100644
--- a/app/test/test_table_acl.c
+++ b/app/test/test_table_acl.c
@@ -713,7 +713,7 @@ test_pipeline_single_filter(int expected_count)
 		void *objs[RING_TX_SIZE];
 		struct rte_mbuf *mbuf;
 
-		ret = rte_ring_sc_dequeue_burst(rings_tx[i], objs, 10);
+		ret = rte_ring_sc_dequeue_burst(rings_tx[i], objs, 10, NULL);
 		if (ret <= 0) {
 			printf("Got no objects from ring %d - error code %d\n",
 				i, ret);
diff --git a/app/test/test_table_pipeline.c b/app/test/test_table_pipeline.c
index 36bfeda..b58aa5d 100644
--- a/app/test/test_table_pipeline.c
+++ b/app/test/test_table_pipeline.c
@@ -494,7 +494,7 @@ test_pipeline_single_filter(int test_type, int expected_count)
 		void *objs[RING_TX_SIZE];
 		struct rte_mbuf *mbuf;
 
-		ret = rte_ring_sc_dequeue_burst(rings_tx[i], objs, 10);
+		ret = rte_ring_sc_dequeue_burst(rings_tx[i], objs, 10, NULL);
 		if (ret <= 0)
 			printf("Got no objects from ring %d - error code %d\n",
 				i, ret);
diff --git a/app/test/test_table_ports.c b/app/test/test_table_ports.c
index 395f4f3..39592ce 100644
--- a/app/test/test_table_ports.c
+++ b/app/test/test_table_ports.c
@@ -163,7 +163,7 @@ test_port_ring_writer(void)
 	rte_port_ring_writer_ops.f_flush(port);
 	expected_pkts = 1;
 	received_pkts = rte_ring_sc_dequeue_burst(port_ring_writer_params.ring,
-		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz);
+		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz, NULL);
 
 	if (received_pkts < expected_pkts)
 		return -7;
@@ -178,7 +178,7 @@ test_port_ring_writer(void)
 
 	expected_pkts = RTE_PORT_IN_BURST_SIZE_MAX;
 	received_pkts = rte_ring_sc_dequeue_burst(port_ring_writer_params.ring,
-		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz);
+		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz, NULL);
 
 	if (received_pkts < expected_pkts)
 		return -8;
@@ -193,7 +193,7 @@ test_port_ring_writer(void)
 
 	expected_pkts = RTE_PORT_IN_BURST_SIZE_MAX;
 	received_pkts = rte_ring_sc_dequeue_burst(port_ring_writer_params.ring,
-		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz);
+		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz, NULL);
 
 	if (received_pkts < expected_pkts)
 		return -8;
@@ -208,7 +208,7 @@ test_port_ring_writer(void)
 
 	expected_pkts = RTE_PORT_IN_BURST_SIZE_MAX;
 	received_pkts = rte_ring_sc_dequeue_burst(port_ring_writer_params.ring,
-		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz);
+		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz, NULL);
 
 	if (received_pkts < expected_pkts)
 		return -9;
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 39e070c..b209355 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -342,7 +342,7 @@ virtual_ethdev_rx_burst_success(void *queue __rte_unused,
 	dev_private = vrtl_eth_dev->data->dev_private;
 
 	rx_count = rte_ring_dequeue_burst(dev_private->rx_queue, (void **) bufs,
-			nb_pkts);
+			nb_pkts, NULL);
 
 	/* increments ipackets count */
 	dev_private->eth_stats.ipackets += rx_count;
@@ -508,7 +508,7 @@ virtual_ethdev_get_mbufs_from_tx_queue(uint8_t port_id,
 
 	dev_private = vrtl_eth_dev->data->dev_private;
 	return rte_ring_dequeue_burst(dev_private->tx_queue, (void **)pkt_burst,
-		burst_length);
+		burst_length, NULL);
 }
 
 static uint8_t
diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c
index ed5a9fc..f68ec8d 100644
--- a/drivers/crypto/null/null_crypto_pmd.c
+++ b/drivers/crypto/null/null_crypto_pmd.c
@@ -155,7 +155,7 @@ null_crypto_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 	unsigned nb_dequeued;
 
 	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
-			(void **)ops, nb_ops);
+			(void **)ops, nb_ops, NULL);
 	qp->qp_stats.dequeued_count += nb_dequeued;
 
 	return nb_dequeued;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f3ac9e2..96638af 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1008,7 +1008,8 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 		struct port *port = &mode_8023ad_ports[slaves[i]];
 
 		slave_slow_nb_pkts[i] = rte_ring_dequeue_burst(port->tx_ring,
-				slow_pkts, BOND_MODE_8023AX_SLAVE_TX_PKTS);
+				slow_pkts, BOND_MODE_8023AX_SLAVE_TX_PKTS,
+				NULL);
 		slave_nb_pkts[i] = slave_slow_nb_pkts[i];
 
 		for (j = 0; j < slave_slow_nb_pkts[i]; j++)
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index adbf478..77ef3a1 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -88,7 +88,7 @@ eth_ring_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	void **ptrs = (void *)&bufs[0];
 	struct ring_queue *r = q;
 	const uint16_t nb_rx = (uint16_t)rte_ring_dequeue_burst(r->rng,
-			ptrs, nb_bufs);
+			ptrs, nb_bufs, NULL);
 	if (r->rng->flags & RING_F_SC_DEQ)
 		r->rx_pkts.cnt += nb_rx;
 	else
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index cfd360b..5cb6185 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -330,7 +330,7 @@ lcore_tx(struct rte_ring *in_r)
 
 			struct rte_mbuf *bufs[BURST_SIZE];
 			const uint16_t nb_rx = rte_ring_dequeue_burst(in_r,
-					(void *)bufs, BURST_SIZE);
+					(void *)bufs, BURST_SIZE, NULL);
 			app_stats.tx.dequeue_pkts += nb_rx;
 
 			/* if we get no traffic, flush anything we have */
diff --git a/examples/load_balancer/runtime.c b/examples/load_balancer/runtime.c
index 1645994..8192c08 100644
--- a/examples/load_balancer/runtime.c
+++ b/examples/load_balancer/runtime.c
@@ -349,7 +349,8 @@ app_lcore_io_tx(
 			ret = rte_ring_sc_dequeue_bulk(
 				ring,
 				(void **) &lp->tx.mbuf_out[port].array[n_mbufs],
-				bsz_rd);
+				bsz_rd,
+				NULL);
 
 			if (unlikely(ret == 0))
 				continue;
@@ -504,7 +505,8 @@ app_lcore_worker(
 		ret = rte_ring_sc_dequeue_bulk(
 			ring_in,
 			(void **) lp->mbuf_in.array,
-			bsz_rd);
+			bsz_rd,
+			NULL);
 
 		if (unlikely(ret == 0))
 			continue;
diff --git a/examples/multi_process/client_server_mp/mp_client/client.c b/examples/multi_process/client_server_mp/mp_client/client.c
index dca9eb9..01b535c 100644
--- a/examples/multi_process/client_server_mp/mp_client/client.c
+++ b/examples/multi_process/client_server_mp/mp_client/client.c
@@ -279,7 +279,8 @@ main(int argc, char *argv[])
 		uint16_t i, rx_pkts;
 		uint8_t port;
 
-		rx_pkts = rte_ring_dequeue_burst(rx_ring, pkts, PKT_READ_SIZE);
+		rx_pkts = rte_ring_dequeue_burst(rx_ring, pkts,
+				PKT_READ_SIZE, NULL);
 
 		if (unlikely(rx_pkts == 0)){
 			if (need_flush)
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index d268350..7719dad 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -462,7 +462,7 @@ worker_thread(void *args_ptr)
 
 		/* dequeue the mbufs from rx_to_workers ring */
 		burst_size = rte_ring_dequeue_burst(ring_in,
-				(void *)burst_buffer, MAX_PKTS_BURST);
+				(void *)burst_buffer, MAX_PKTS_BURST, NULL);
 		if (unlikely(burst_size == 0))
 			continue;
 
@@ -510,7 +510,7 @@ send_thread(struct send_thread_args *args)
 
 		/* deque the mbufs from workers_to_tx ring */
 		nb_dq_mbufs = rte_ring_dequeue_burst(args->ring_in,
-				(void *)mbufs, MAX_PKTS_BURST);
+				(void *)mbufs, MAX_PKTS_BURST, NULL);
 
 		if (unlikely(nb_dq_mbufs == 0))
 			continue;
@@ -595,7 +595,7 @@ tx_thread(struct rte_ring *ring_in)
 
 		/* deque the mbufs from workers_to_tx ring */
 		dqnum = rte_ring_dequeue_burst(ring_in,
-				(void *)mbufs, MAX_PKTS_BURST);
+				(void *)mbufs, MAX_PKTS_BURST, NULL);
 
 		if (unlikely(dqnum == 0))
 			continue;
diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
index 0c81a15..15f117f 100644
--- a/examples/qos_sched/app_thread.c
+++ b/examples/qos_sched/app_thread.c
@@ -179,7 +179,7 @@ app_tx_thread(struct thread_conf **confs)
 
 	while ((conf = confs[conf_idx])) {
 		retval = rte_ring_sc_dequeue_bulk(conf->tx_ring, (void **)mbufs,
-					burst_conf.qos_dequeue);
+					burst_conf.qos_dequeue, NULL);
 		if (likely(retval != 0)) {
 			app_send_packets(conf, mbufs, burst_conf.qos_dequeue);
 
@@ -218,7 +218,7 @@ app_worker_thread(struct thread_conf **confs)
 
 		/* Read packet from the ring */
 		nb_pkt = rte_ring_sc_dequeue_burst(conf->rx_ring, (void **)mbufs,
-					burst_conf.ring_burst);
+					burst_conf.ring_burst, NULL);
 		if (likely(nb_pkt)) {
 			int nb_sent = rte_sched_port_enqueue(conf->sched_port, mbufs,
 					nb_pkt);
@@ -254,7 +254,7 @@ app_mixed_thread(struct thread_conf **confs)
 
 		/* Read packet from the ring */
 		nb_pkt = rte_ring_sc_dequeue_burst(conf->rx_ring, (void **)mbufs,
-					burst_conf.ring_burst);
+					burst_conf.ring_burst, NULL);
 		if (likely(nb_pkt)) {
 			int nb_sent = rte_sched_port_enqueue(conf->sched_port, mbufs,
 					nb_pkt);
diff --git a/examples/quota_watermark/qw/main.c b/examples/quota_watermark/qw/main.c
index 8fb7eb1..ef39053 100644
--- a/examples/quota_watermark/qw/main.c
+++ b/examples/quota_watermark/qw/main.c
@@ -243,7 +243,7 @@ pipeline_stage(__attribute__((unused)) void *args)
             }
 
             /* Dequeue up to quota mbuf from rx */
-            nb_dq_pkts = rte_ring_dequeue_burst(rx, pkts, *quota);
+            nb_dq_pkts = rte_ring_dequeue_burst(rx, pkts, *quota, NULL);
             if (unlikely(nb_dq_pkts < 0))
                 continue;
 
@@ -297,7 +297,8 @@ send_stage(__attribute__((unused)) void *args)
                 continue;
 
             /* Dequeue packets from tx and send them */
-            nb_dq_pkts = (uint16_t) rte_ring_dequeue_burst(tx, (void *) tx_pkts, *quota);
+            nb_dq_pkts = (uint16_t) rte_ring_dequeue_burst(tx, (void *) tx_pkts,
+        		    *quota, NULL);
             rte_eth_tx_burst(dest_port_id, 0, tx_pkts, nb_dq_pkts);
 
             /* TODO: Check if nb_dq_pkts == nb_tx_pkts? */
diff --git a/examples/server_node_efd/node/node.c b/examples/server_node_efd/node/node.c
index 9ec6a05..f780b92 100644
--- a/examples/server_node_efd/node/node.c
+++ b/examples/server_node_efd/node/node.c
@@ -392,7 +392,7 @@ main(int argc, char *argv[])
 		 */
 		while (rx_pkts > 0 &&
 				unlikely(rte_ring_dequeue_bulk(rx_ring, pkts,
-					rx_pkts) == 0))
+					rx_pkts, NULL) == 0))
 			rx_pkts = (uint16_t)RTE_MIN(rte_ring_count(rx_ring),
 					PKT_READ_SIZE);
 
diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index 6552199..645c0cf 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -536,7 +536,8 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
 		if (cached_free_slots->len == 0) {
 			/* Need to get another burst of free slots from global ring */
 			n_slots = rte_ring_mc_dequeue_burst(h->free_slots,
-					cached_free_slots->objs, LCORE_CACHE_SIZE);
+					cached_free_slots->objs,
+					LCORE_CACHE_SIZE, NULL);
 			if (n_slots == 0)
 				return -ENOSPC;
 
diff --git a/lib/librte_mempool/rte_mempool_ring.c b/lib/librte_mempool/rte_mempool_ring.c
index 9b8fd2b..5c132bf 100644
--- a/lib/librte_mempool/rte_mempool_ring.c
+++ b/lib/librte_mempool/rte_mempool_ring.c
@@ -58,14 +58,14 @@ static int
 common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
 {
 	return rte_ring_mc_dequeue_bulk(mp->pool_data,
-			obj_table, n) == 0 ? -ENOBUFS : 0;
+			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
 }
 
 static int
 common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
 {
 	return rte_ring_sc_dequeue_bulk(mp->pool_data,
-			obj_table, n) == 0 ? -ENOBUFS : 0;
+			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
 }
 
 static unsigned
diff --git a/lib/librte_port/rte_port_frag.c b/lib/librte_port/rte_port_frag.c
index 0fcace9..320407e 100644
--- a/lib/librte_port/rte_port_frag.c
+++ b/lib/librte_port/rte_port_frag.c
@@ -186,7 +186,8 @@ rte_port_ring_reader_frag_rx(void *port,
 		/* If "pkts" buffer is empty, read packet burst from ring */
 		if (p->n_pkts == 0) {
 			p->n_pkts = rte_ring_sc_dequeue_burst(p->ring,
-				(void **) p->pkts, RTE_PORT_IN_BURST_SIZE_MAX);
+				(void **) p->pkts, RTE_PORT_IN_BURST_SIZE_MAX,
+				NULL);
 			RTE_PORT_RING_READER_FRAG_STATS_PKTS_IN_ADD(p, p->n_pkts);
 			if (p->n_pkts == 0)
 				return n_pkts_out;
diff --git a/lib/librte_port/rte_port_ring.c b/lib/librte_port/rte_port_ring.c
index 9fadac7..492b0e7 100644
--- a/lib/librte_port/rte_port_ring.c
+++ b/lib/librte_port/rte_port_ring.c
@@ -111,7 +111,8 @@ rte_port_ring_reader_rx(void *port, struct rte_mbuf **pkts, uint32_t n_pkts)
 	struct rte_port_ring_reader *p = (struct rte_port_ring_reader *) port;
 	uint32_t nb_rx;
 
-	nb_rx = rte_ring_sc_dequeue_burst(p->ring, (void **) pkts, n_pkts);
+	nb_rx = rte_ring_sc_dequeue_burst(p->ring, (void **) pkts,
+			n_pkts, NULL);
 	RTE_PORT_RING_READER_STATS_PKTS_IN_ADD(p, nb_rx);
 
 	return nb_rx;
@@ -124,7 +125,8 @@ rte_port_ring_multi_reader_rx(void *port, struct rte_mbuf **pkts,
 	struct rte_port_ring_reader *p = (struct rte_port_ring_reader *) port;
 	uint32_t nb_rx;
 
-	nb_rx = rte_ring_mc_dequeue_burst(p->ring, (void **) pkts, n_pkts);
+	nb_rx = rte_ring_mc_dequeue_burst(p->ring, (void **) pkts,
+			n_pkts, NULL);
 	RTE_PORT_RING_READER_STATS_PKTS_IN_ADD(p, nb_rx);
 
 	return nb_rx;
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 2f8995c..b6123ba 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -497,7 +497,8 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 
 static inline unsigned int __attribute__((always_inline))
 __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
-		 unsigned n, enum rte_ring_queue_behavior behavior)
+		 unsigned int n, enum rte_ring_queue_behavior behavior,
+		 unsigned int *available)
 {
 	uint32_t cons_head, prod_tail;
 	uint32_t cons_next, entries;
@@ -506,11 +507,6 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 	unsigned int i;
 	uint32_t mask = r->mask;
 
-	/* Avoid the unnecessary cmpset operation below, which is also
-	 * potentially harmful when n equals 0. */
-	if (n == 0)
-		return 0;
-
 	/* move cons.head atomically */
 	do {
 		/* Restore n as it may change every loop */
@@ -525,15 +521,11 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 		entries = (prod_tail - cons_head);
 
 		/* Set the actual entries for dequeue */
-		if (n > entries) {
-			if (behavior == RTE_RING_QUEUE_FIXED)
-				return 0;
-			else {
-				if (unlikely(entries == 0))
-					return 0;
-				n = entries;
-			}
-		}
+		if (n > entries)
+			n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : entries;
+
+		if (unlikely(n == 0))
+			goto end;
 
 		cons_next = cons_head + n;
 		success = rte_atomic32_cmpset(&r->cons.head, cons_head,
@@ -552,7 +544,9 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 		rte_pause();
 
 	r->cons.tail = cons_next;
-
+end:
+	if (available != NULL)
+		*available = entries - n;
 	return n;
 }
 
@@ -581,7 +575,8 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
  */
 static inline unsigned int __attribute__((always_inline))
 __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
-		 unsigned n, enum rte_ring_queue_behavior behavior)
+		 unsigned int n, enum rte_ring_queue_behavior behavior,
+		 unsigned int *available)
 {
 	uint32_t cons_head, prod_tail;
 	uint32_t cons_next, entries;
@@ -596,15 +591,11 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
 	 * and size(ring)-1. */
 	entries = prod_tail - cons_head;
 
-	if (n > entries) {
-		if (behavior == RTE_RING_QUEUE_FIXED)
-			return 0;
-		else {
-			if (unlikely(entries == 0))
-				return 0;
-			n = entries;
-		}
-	}
+	if (n > entries)
+		n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : entries;
+
+	if (unlikely(entries == 0))
+		goto end;
 
 	cons_next = cons_head + n;
 	r->cons.head = cons_next;
@@ -614,6 +605,9 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
 	rte_smp_rmb();
 
 	r->cons.tail = cons_next;
+end:
+	if (available != NULL)
+		*available = entries - n;
 	return n;
 }
 
@@ -760,9 +754,11 @@ rte_ring_enqueue(struct rte_ring *r, void *obj)
  *   The number of objects dequeued, either 0 or n
  */
 static inline unsigned int __attribute__((always_inline))
-rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
+rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table,
+		unsigned int n, unsigned int *available)
 {
-	return __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
+	return __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
+			available);
 }
 
 /**
@@ -779,9 +775,11 @@ rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
  *   The number of objects dequeued, either 0 or n
  */
 static inline unsigned int __attribute__((always_inline))
-rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
+rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table,
+		unsigned int n, unsigned int *available)
 {
-	return __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
+	return __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
+			available);
 }
 
 /**
@@ -801,12 +799,13 @@ rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
  *   The number of objects dequeued, either 0 or n
  */
 static inline unsigned int __attribute__((always_inline))
-rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
+rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n,
+		unsigned int *available)
 {
 	if (r->cons.sc_dequeue)
-		return rte_ring_sc_dequeue_bulk(r, obj_table, n);
+		return rte_ring_sc_dequeue_bulk(r, obj_table, n, available);
 	else
-		return rte_ring_mc_dequeue_bulk(r, obj_table, n);
+		return rte_ring_mc_dequeue_bulk(r, obj_table, n, available);
 }
 
 /**
@@ -827,7 +826,7 @@ rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
 static inline int __attribute__((always_inline))
 rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
 {
-	return rte_ring_mc_dequeue_bulk(r, obj_p, 1)  ? 0 : -ENOBUFS;
+	return rte_ring_mc_dequeue_bulk(r, obj_p, 1, NULL)  ? 0 : -ENOBUFS;
 }
 
 /**
@@ -845,7 +844,7 @@ rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
 static inline int __attribute__((always_inline))
 rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
 {
-	return rte_ring_sc_dequeue_bulk(r, obj_p, 1) ? 0 : -ENOBUFS;
+	return rte_ring_sc_dequeue_bulk(r, obj_p, 1, NULL) ? 0 : -ENOBUFS;
 }
 
 /**
@@ -867,7 +866,7 @@ rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
 static inline int __attribute__((always_inline))
 rte_ring_dequeue(struct rte_ring *r, void **obj_p)
 {
-	return rte_ring_dequeue_bulk(r, obj_p, 1) ? 0 : -ENOBUFS;
+	return rte_ring_dequeue_bulk(r, obj_p, 1, NULL) ? 0 : -ENOBUFS;
 }
 
 /**
@@ -1057,9 +1056,11 @@ rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,
  *   - n: Actual number of objects dequeued, 0 if ring is empty
  */
 static inline unsigned __attribute__((always_inline))
-rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)
+rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table,
+		unsigned int n, unsigned int *available)
 {
-	return __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
+	return __rte_ring_mc_do_dequeue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, available);
 }
 
 /**
@@ -1077,9 +1078,11 @@ rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)
  *   - n: Actual number of objects dequeued, 0 if ring is empty
  */
 static inline unsigned __attribute__((always_inline))
-rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)
+rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table,
+		unsigned int n, unsigned int *available)
 {
-	return __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
+	return __rte_ring_sc_do_dequeue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, available);
 }
 
 /**
@@ -1099,12 +1102,13 @@ rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)
  *   - Number of objects dequeued
  */
 static inline unsigned __attribute__((always_inline))
-rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)
+rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table,
+		unsigned int n, unsigned int *available)
 {
 	if (r->cons.sc_dequeue)
-		return rte_ring_sc_dequeue_burst(r, obj_table, n);
+		return rte_ring_sc_dequeue_burst(r, obj_table, n, available);
 	else
-		return rte_ring_mc_dequeue_burst(r, obj_table, n);
+		return rte_ring_mc_dequeue_burst(r, obj_table, n, available);
 }
 
 #ifdef __cplusplus
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 14/19] ring: reduce scope of local variables
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (15 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 13/19] ring: allow dequeue fns to return remaining entry count Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 15/19] ring: separate out head index manipulation for enq/deq Bruce Richardson
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

The local variable i is only used for loop control so define it in
the enqueue and dequeue blocks directly, rather than at the function
level.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_ring/rte_ring.h | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index b6123ba..6aabd7f 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -281,6 +281,7 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
  * Placed here since identical code needed in both
  * single and multi producer enqueue functions */
 #define ENQUEUE_PTRS() do { \
+	unsigned int i; \
 	const uint32_t size = r->size; \
 	uint32_t idx = prod_head & mask; \
 	if (likely(idx + n < size)) { \
@@ -307,6 +308,7 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
  * Placed here since identical code needed in both
  * single and multi consumer dequeue functions */
 #define DEQUEUE_PTRS() do { \
+	unsigned int i; \
 	uint32_t idx = cons_head & mask; \
 	const uint32_t size = r->size; \
 	if (likely(idx + n < size)) { \
@@ -361,7 +363,6 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	uint32_t cons_tail, free_entries;
 	const unsigned int max = n;
 	int success;
-	unsigned int i;
 	uint32_t mask = r->mask;
 
 	/* move prod.head atomically */
@@ -435,7 +436,6 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 {
 	uint32_t prod_head, cons_tail;
 	uint32_t prod_next, free_entries;
-	unsigned int i;
 	uint32_t mask = r->mask;
 
 	prod_head = r->prod.head;
@@ -504,7 +504,6 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 	uint32_t cons_next, entries;
 	const unsigned max = n;
 	int success;
-	unsigned int i;
 	uint32_t mask = r->mask;
 
 	/* move cons.head atomically */
@@ -580,7 +579,6 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
 {
 	uint32_t cons_head, prod_tail;
 	uint32_t cons_next, entries;
-	unsigned int i;
 	uint32_t mask = r->mask;
 
 	cons_head = r->cons.head;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 15/19] ring: separate out head index manipulation for enq/deq
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (16 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 14/19] ring: reduce scope of local variables Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 16/19] ring: create common function for updating tail idx Bruce Richardson
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

We can write a single common function for head manipulation for enq
and a common one for deq, allowing us to have a single worker function
for enq and deq, rather than two of each. Update all other inline
functions to use the new functions.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_ring/rte_ring.c |   4 +-
 lib/librte_ring/rte_ring.h | 354 +++++++++++++++++++--------------------------
 2 files changed, 151 insertions(+), 207 deletions(-)

diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index e5af4ed..13887ab 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -138,8 +138,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
 	if (ret < 0 || ret >= (int)sizeof(r->name))
 		return -ENAMETOOLONG;
 	r->flags = flags;
-	r->prod.sp_enqueue = !!(flags & RING_F_SP_ENQ);
-	r->cons.sc_dequeue = !!(flags & RING_F_SC_DEQ);
+	r->prod.sp_enqueue = (flags & RING_F_SP_ENQ) ? __IS_SP : __IS_MP;
+	r->cons.sc_dequeue = (flags & RING_F_SC_DEQ) ? __IS_SC : __IS_MC;
 	r->size = count;
 	r->mask = count-1;
 	r->prod.head = r->cons.head = 0;
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 6aabd7f..0c6311a 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -165,6 +165,12 @@ struct rte_ring {
 #define RTE_RING_QUOT_EXCEED (1 << 31)  /**< Quota exceed for burst ops */
 #define RTE_RING_SZ_MASK  (unsigned)(0x0fffffff) /**< Ring size mask */
 
+/* @internal defines for passing to the enqueue dequeue worker functions */
+#define __IS_SP 1
+#define __IS_MP 0
+#define __IS_SC 1
+#define __IS_MC 0
+
 /**
  * Calculate the memory size needed for a ring
  *
@@ -283,7 +289,7 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
 #define ENQUEUE_PTRS() do { \
 	unsigned int i; \
 	const uint32_t size = r->size; \
-	uint32_t idx = prod_head & mask; \
+	uint32_t idx = prod_head & r->mask; \
 	if (likely(idx + n < size)) { \
 		for (i = 0; i < (n & ((~(unsigned)0x3))); i+=4, idx+=4) { \
 			r->ring[idx] = obj_table[i]; \
@@ -309,7 +315,7 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
  * single and multi consumer dequeue functions */
 #define DEQUEUE_PTRS() do { \
 	unsigned int i; \
-	uint32_t idx = cons_head & mask; \
+	uint32_t idx = cons_head & r->mask; \
 	const uint32_t size = r->size; \
 	if (likely(idx + n < size)) { \
 		for (i = 0; i < (n & (~(unsigned)0x3)); i+=4, idx+=4) {\
@@ -332,87 +338,71 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
 } while (0)
 
 /**
- * @internal Enqueue several objects on the ring (multi-producers safe).
- *
- * This function uses a "compare and set" instruction to move the
- * producer index atomically.
+ * @internal This function updates the producer head for enqueue
  *
  * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
+ *   A pointer to the ring structure
+ * @param is_sp
+ *   Indicates whether multi-producer path is needed or not
  * @param n
- *   The number of objects to add in the ring from the obj_table.
+ *   The number of elements we will want to enqueue, i.e. how far should the
+ *   head be moved
  * @param behavior
  *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring
- *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items a possible from ring
+ *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring
+ * @param old_head
+ *   Returns head value as it was before the move, i.e. where enqueue starts
+ * @param new_head
+ *   Returns the current/new head value i.e. where enqueue finishes
+ * @param free_entries
+ *   Returns the amount of free space in the ring BEFORE head was moved
  * @return
- *   Depend on the behavior value
- *   if behavior = RTE_RING_QUEUE_FIXED
- *   - 0: Success; objects enqueue.
- *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
- *   if behavior = RTE_RING_QUEUE_VARIABLE
- *   - n: Actual number of objects enqueued.
+ *   N - the number of elements that should be enqueued
  */
-static inline unsigned int __attribute__((always_inline))
-__rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
-			 unsigned int n, enum rte_ring_queue_behavior behavior,
-			 unsigned int *free_space)
+static inline __attribute__((always_inline)) unsigned int
+__rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
+		unsigned int n, enum rte_ring_queue_behavior behavior,
+		uint32_t *old_head, uint32_t *new_head,
+		uint32_t *free_entries)
 {
-	uint32_t prod_head, prod_next;
-	uint32_t cons_tail, free_entries;
-	const unsigned int max = n;
+	const uint32_t mask = r->mask;
+	unsigned int max = n;
 	int success;
-	uint32_t mask = r->mask;
 
-	/* move prod.head atomically */
 	do {
 		/* Reset n to the initial burst count */
 		n = max;
 
-		prod_head = r->prod.head;
-		cons_tail = r->cons.tail;
+		*old_head = r->prod.head;
+		const uint32_t cons_tail = r->cons.tail;
 		/* The subtraction is done between two unsigned 32bits value
 		 * (the result is always modulo 32 bits even if we have
-		 * prod_head > cons_tail). So 'free_entries' is always between 0
+		 * *old_head > cons_tail). So 'free_entries' is always between 0
 		 * and size(ring)-1. */
-		free_entries = (mask + cons_tail - prod_head);
+		*free_entries = (mask + cons_tail - *old_head);
 
 		/* check that we have enough room in ring */
-		if (unlikely(n > free_entries))
+		if (unlikely(n > *free_entries))
 			n = (behavior == RTE_RING_QUEUE_FIXED) ?
-					0 : free_entries;
+					0 : *free_entries;
 
 		if (n == 0)
-			goto end;
-
-		prod_next = prod_head + n;
-		success = rte_atomic32_cmpset(&r->prod.head, prod_head,
-					      prod_next);
+			return 0;
+
+		*new_head = *old_head + n;
+		if (is_sp)
+			r->prod.head = *new_head, success = 1;
+		else
+			success = rte_atomic32_cmpset(&r->prod.head,
+					*old_head, *new_head);
 	} while (unlikely(success == 0));
-
-	/* write entries in ring */
-	ENQUEUE_PTRS();
-	rte_smp_wmb();
-
-	/*
-	 * If there are other enqueues in progress that preceded us,
-	 * we need to wait for them to complete
-	 */
-	while (unlikely(r->prod.tail != prod_head))
-		rte_pause();
-
-	r->prod.tail = prod_next;
-end:
-	if (free_space != NULL)
-		*free_space = free_entries - n;
 	return n;
 }
 
 /**
- * @internal Enqueue several objects on a ring (NOT multi-producers safe).
+ * @internal Enqueue several objects on the ring
  *
- * @param r
+  * @param r
  *   A pointer to the ring structure.
  * @param obj_table
  *   A pointer to a table of void * pointers (objects).
@@ -420,48 +410,39 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
  *   The number of objects to add in the ring from the obj_table.
  * @param behavior
  *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring
- *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items a possible from ring
+ *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring
+ * @param is_sp
+ *   Indicates whether to use single producer or multi-producer head update
+ * @param free_space
+ *   returns the amount of space after the enqueue operation has finished
  * @return
- *   Depend on the behavior value
- *   if behavior = RTE_RING_QUEUE_FIXED
- *   - 0: Success; objects enqueue.
- *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
- *   if behavior = RTE_RING_QUEUE_VARIABLE
  *   - n: Actual number of objects enqueued.
  */
-static inline unsigned int __attribute__((always_inline))
-__rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
-			 unsigned int n, enum rte_ring_queue_behavior behavior,
-			 unsigned int *free_space)
+static inline __attribute__((always_inline)) unsigned int
+__rte_ring_do_enqueue(struct rte_ring *r, void * const *obj_table,
+		 unsigned int n, enum rte_ring_queue_behavior behavior,
+		 int is_sp, unsigned int *free_space)
 {
-	uint32_t prod_head, cons_tail;
-	uint32_t prod_next, free_entries;
-	uint32_t mask = r->mask;
-
-	prod_head = r->prod.head;
-	cons_tail = r->cons.tail;
-	/* The subtraction is done between two unsigned 32bits value
-	 * (the result is always modulo 32 bits even if we have
-	 * prod_head > cons_tail). So 'free_entries' is always between 0
-	 * and size(ring)-1. */
-	free_entries = mask + cons_tail - prod_head;
-
-	/* check that we have enough room in ring */
-	if (unlikely(n > free_entries))
-		n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : free_entries;
+	uint32_t prod_head, prod_next;
+	uint32_t free_entries;
 
+	n = __rte_ring_move_prod_head(r, is_sp, n, behavior,
+			&prod_head, &prod_next, &free_entries);
 	if (n == 0)
 		goto end;
 
-
-	prod_next = prod_head + n;
-	r->prod.head = prod_next;
-
-	/* write entries in ring */
 	ENQUEUE_PTRS();
 	rte_smp_wmb();
 
+	/*
+	 * If there are other enqueues in progress that preceded us,
+	 * we need to wait for them to complete
+	 */
+	while (unlikely(r->prod.tail != prod_head))
+		rte_pause();
+
 	r->prod.tail = prod_next;
+
 end:
 	if (free_space != NULL)
 		*free_space = free_entries - n;
@@ -469,140 +450,110 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 }
 
 /**
- * @internal Dequeue several objects from a ring (multi-consumers safe). When
- * the request objects are more than the available objects, only dequeue the
- * actual number of objects
- *
- * This function uses a "compare and set" instruction to move the
- * consumer index atomically.
+ * @internal This function updates the consumer head for dequeue
  *
  * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
+ *   A pointer to the ring structure
+ * @param is_sc
+ *   Indicates whether multi-consumer path is needed or not
  * @param n
- *   The number of objects to dequeue from the ring to the obj_table.
+ *   The number of elements we will want to enqueue, i.e. how far should the
+ *   head be moved
  * @param behavior
  *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring
- *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items a possible from ring
+ *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring
+ * @param old_head
+ *   Returns head value as it was before the move, i.e. where dequeue starts
+ * @param new_head
+ *   Returns the current/new head value i.e. where dequeue finishes
+ * @param entries
+ *   Returns the number of entries in the ring BEFORE head was moved
  * @return
- *   Depend on the behavior value
- *   if behavior = RTE_RING_QUEUE_FIXED
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
- *     dequeued.
- *   if behavior = RTE_RING_QUEUE_VARIABLE
- *   - n: Actual number of objects dequeued.
+ *   N - the number of elements that should be enqueued
  */
-
-static inline unsigned int __attribute__((always_inline))
-__rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
-		 unsigned int n, enum rte_ring_queue_behavior behavior,
-		 unsigned int *available)
+static inline __attribute__((always_inline)) unsigned int
+__rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
+		unsigned int n, enum rte_ring_queue_behavior behavior,
+		uint32_t *old_head, uint32_t *new_head,
+		uint32_t *entries)
 {
-	uint32_t cons_head, prod_tail;
-	uint32_t cons_next, entries;
-	const unsigned max = n;
+	unsigned int max = n;
 	int success;
-	uint32_t mask = r->mask;
 
 	/* move cons.head atomically */
 	do {
 		/* Restore n as it may change every loop */
 		n = max;
 
-		cons_head = r->cons.head;
-		prod_tail = r->prod.tail;
+		*old_head = r->cons.head;
+		const uint32_t prod_tail = r->prod.tail;
 		/* The subtraction is done between two unsigned 32bits value
 		 * (the result is always modulo 32 bits even if we have
 		 * cons_head > prod_tail). So 'entries' is always between 0
 		 * and size(ring)-1. */
-		entries = (prod_tail - cons_head);
+		*entries = (prod_tail - *old_head);
 
 		/* Set the actual entries for dequeue */
-		if (n > entries)
-			n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : entries;
+		if (n > *entries)
+			n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : *entries;
 
 		if (unlikely(n == 0))
-			goto end;
-
-		cons_next = cons_head + n;
-		success = rte_atomic32_cmpset(&r->cons.head, cons_head,
-					      cons_next);
+			return 0;
+
+		*new_head = *old_head + n;
+		if (is_sc)
+			r->cons.head = *new_head, success = 1;
+		else
+			success = rte_atomic32_cmpset(&r->cons.head, *old_head,
+					*new_head);
 	} while (unlikely(success == 0));
-
-	/* copy in table */
-	DEQUEUE_PTRS();
-	rte_smp_rmb();
-
-	/*
-	 * If there are other dequeues in progress that preceded us,
-	 * we need to wait for them to complete
-	 */
-	while (unlikely(r->cons.tail != cons_head))
-		rte_pause();
-
-	r->cons.tail = cons_next;
-end:
-	if (available != NULL)
-		*available = entries - n;
 	return n;
 }
 
 /**
- * @internal Dequeue several objects from a ring (NOT multi-consumers safe).
- * When the request objects are more than the available objects, only dequeue
- * the actual number of objects
+ * @internal Dequeue several objects from the ring
  *
  * @param r
  *   A pointer to the ring structure.
  * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
+ *   A pointer to a table of void * pointers (objects).
  * @param n
- *   The number of objects to dequeue from the ring to the obj_table.
+ *   The number of objects to pull from the ring.
  * @param behavior
  *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring
- *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items a possible from ring
+ *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring
+ * @param is_sc
+ *   Indicates whether to use single consumer or multi-consumer head update
+ * @param available
+ *   returns the number of remaining ring entries after the dequeue has finished
  * @return
- *   Depend on the behavior value
- *   if behavior = RTE_RING_QUEUE_FIXED
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
- *     dequeued.
- *   if behavior = RTE_RING_QUEUE_VARIABLE
  *   - n: Actual number of objects dequeued.
  */
-static inline unsigned int __attribute__((always_inline))
-__rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
+static inline __attribute__((always_inline)) unsigned int
+__rte_ring_do_dequeue(struct rte_ring *r, void **obj_table,
 		 unsigned int n, enum rte_ring_queue_behavior behavior,
-		 unsigned int *available)
+		 int is_mp, unsigned int *available)
 {
-	uint32_t cons_head, prod_tail;
-	uint32_t cons_next, entries;
-	uint32_t mask = r->mask;
-
-	cons_head = r->cons.head;
-	prod_tail = r->prod.tail;
-	/* The subtraction is done between two unsigned 32bits value
-	 * (the result is always modulo 32 bits even if we have
-	 * cons_head > prod_tail). So 'entries' is always between 0
-	 * and size(ring)-1. */
-	entries = prod_tail - cons_head;
-
-	if (n > entries)
-		n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : entries;
-
-	if (unlikely(entries == 0))
-		goto end;
+	uint32_t cons_head, cons_next;
+	uint32_t entries;
 
-	cons_next = cons_head + n;
-	r->cons.head = cons_next;
+	n = __rte_ring_move_cons_head(r, is_mp, n, behavior,
+			&cons_head, &cons_next, &entries);
+	if (n == 0)
+		goto end;
 
-	/* copy in table */
 	DEQUEUE_PTRS();
 	rte_smp_rmb();
 
+	/*
+	 * If there are other enqueues in progress that preceded us,
+	 * we need to wait for them to complete
+	 */
+	while (unlikely(r->cons.tail != cons_head))
+		rte_pause();
+
 	r->cons.tail = cons_next;
+
 end:
 	if (available != NULL)
 		*available = entries - n;
@@ -628,8 +579,8 @@ static inline unsigned int __attribute__((always_inline))
 rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
 			 unsigned int n, unsigned int *free_space)
 {
-	return __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
-			free_space);
+	return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
+			__IS_MP, free_space);
 }
 
 /**
@@ -648,8 +599,8 @@ static inline unsigned int __attribute__((always_inline))
 rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
 			 unsigned int n, unsigned int *free_space)
 {
-	return __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
-			free_space);
+	return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
+			__IS_SP, free_space);
 }
 
 /**
@@ -672,10 +623,8 @@ static inline unsigned int __attribute__((always_inline))
 rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
 		      unsigned int n, unsigned int *free_space)
 {
-	if (r->prod.sp_enqueue)
-		return rte_ring_sp_enqueue_bulk(r, obj_table, n, free_space);
-	else
-		return rte_ring_mp_enqueue_bulk(r, obj_table, n, free_space);
+	return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
+			r->prod.sp_enqueue, free_space);
 }
 
 /**
@@ -755,8 +704,8 @@ static inline unsigned int __attribute__((always_inline))
 rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table,
 		unsigned int n, unsigned int *available)
 {
-	return __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
-			available);
+	return __rte_ring_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
+			__IS_MC, available);
 }
 
 /**
@@ -776,8 +725,8 @@ static inline unsigned int __attribute__((always_inline))
 rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table,
 		unsigned int n, unsigned int *available)
 {
-	return __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
-			available);
+	return __rte_ring_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
+			__IS_SC, available);
 }
 
 /**
@@ -800,10 +749,8 @@ static inline unsigned int __attribute__((always_inline))
 rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n,
 		unsigned int *available)
 {
-	if (r->cons.sc_dequeue)
-		return rte_ring_sc_dequeue_bulk(r, obj_table, n, available);
-	else
-		return rte_ring_mc_dequeue_bulk(r, obj_table, n, available);
+	return __rte_ring_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED,
+				r->cons.sc_dequeue, available);
 }
 
 /**
@@ -986,8 +933,8 @@ static inline unsigned __attribute__((always_inline))
 rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
 			 unsigned int n, unsigned int *free_space)
 {
-	return __rte_ring_mp_do_enqueue(r, obj_table, n,
-			RTE_RING_QUEUE_VARIABLE, free_space);
+	return __rte_ring_do_enqueue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, __IS_MP, free_space);
 }
 
 /**
@@ -1006,8 +953,8 @@ static inline unsigned __attribute__((always_inline))
 rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
 			 unsigned int n, unsigned int *free_space)
 {
-	return __rte_ring_sp_do_enqueue(r, obj_table, n,
-			RTE_RING_QUEUE_VARIABLE, free_space);
+	return __rte_ring_do_enqueue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, __IS_SP, free_space);
 }
 
 /**
@@ -1030,10 +977,8 @@ static inline unsigned __attribute__((always_inline))
 rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,
 		      unsigned int n, unsigned int *free_space)
 {
-	if (r->prod.sp_enqueue)
-		return rte_ring_sp_enqueue_burst(r, obj_table, n, free_space);
-	else
-		return rte_ring_mp_enqueue_burst(r, obj_table, n, free_space);
+	return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE,
+			r->prod.sp_enqueue, free_space);
 }
 
 /**
@@ -1057,8 +1002,8 @@ static inline unsigned __attribute__((always_inline))
 rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table,
 		unsigned int n, unsigned int *available)
 {
-	return __rte_ring_mc_do_dequeue(r, obj_table, n,
-			RTE_RING_QUEUE_VARIABLE, available);
+	return __rte_ring_do_dequeue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, __IS_MC, available);
 }
 
 /**
@@ -1079,8 +1024,8 @@ static inline unsigned __attribute__((always_inline))
 rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table,
 		unsigned int n, unsigned int *available)
 {
-	return __rte_ring_sc_do_dequeue(r, obj_table, n,
-			RTE_RING_QUEUE_VARIABLE, available);
+	return __rte_ring_do_dequeue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, __IS_SC, available);
 }
 
 /**
@@ -1103,10 +1048,9 @@ static inline unsigned __attribute__((always_inline))
 rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table,
 		unsigned int n, unsigned int *available)
 {
-	if (r->cons.sc_dequeue)
-		return rte_ring_sc_dequeue_burst(r, obj_table, n, available);
-	else
-		return rte_ring_mc_dequeue_burst(r, obj_table, n, available);
+	return __rte_ring_do_dequeue(r, obj_table, n,
+				RTE_RING_QUEUE_VARIABLE,
+				r->cons.sc_dequeue, available);
 }
 
 #ifdef __cplusplus
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 16/19] ring: create common function for updating tail idx
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (17 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 15/19] ring: separate out head index manipulation for enq/deq Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 17/19] ring: allow macros to work with any type of object Bruce Richardson
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

Both producer and consumer use the same logic for updating the tail
index so merge into a single function.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_ring/rte_ring.h | 32 +++++++++++++++-----------------
 1 file changed, 15 insertions(+), 17 deletions(-)

diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 0c6311a..4a58857 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -337,6 +337,19 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
 	} \
 } while (0)
 
+static inline __attribute__((always_inline)) void
+update_tail(struct rte_ring_ht_ptr *ht_ptr, uint32_t old_val, uint32_t new_val)
+{
+	/*
+	 * If there are other enqueues/dequeues in progress that preceded us,
+	 * we need to wait for them to complete
+	 */
+	while (unlikely(ht_ptr->tail != old_val))
+		rte_pause();
+
+	ht_ptr->tail = new_val;
+}
+
 /**
  * @internal This function updates the producer head for enqueue
  *
@@ -434,15 +447,7 @@ __rte_ring_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	ENQUEUE_PTRS();
 	rte_smp_wmb();
 
-	/*
-	 * If there are other enqueues in progress that preceded us,
-	 * we need to wait for them to complete
-	 */
-	while (unlikely(r->prod.tail != prod_head))
-		rte_pause();
-
-	r->prod.tail = prod_next;
-
+	update_tail(&r->prod, prod_head, prod_next);
 end:
 	if (free_space != NULL)
 		*free_space = free_entries - n;
@@ -545,14 +550,7 @@ __rte_ring_do_dequeue(struct rte_ring *r, void **obj_table,
 	DEQUEUE_PTRS();
 	rte_smp_rmb();
 
-	/*
-	 * If there are other enqueues in progress that preceded us,
-	 * we need to wait for them to complete
-	 */
-	while (unlikely(r->cons.tail != cons_head))
-		rte_pause();
-
-	r->cons.tail = cons_next;
+	update_tail(&r->cons, cons_head, cons_next);
 
 end:
 	if (available != NULL)
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 17/19] ring: allow macros to work with any type of object
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (18 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 16/19] ring: create common function for updating tail idx Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 18/19] ring: add object size parameter to memory size calculation Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 19/19] ring: add event ring implementation Bruce Richardson
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

Modify the enqueue and dequeue macros to support copying any type of
object by passing in the exact object type. We no longer need a
placeholder element array in the ring structure, since the macros just
take the address of the end of the structure, so remove it, leaving
the rte_ring structure as a ring header only.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_ring/rte_ring.h | 68 ++++++++++++++++++++++++----------------------
 1 file changed, 36 insertions(+), 32 deletions(-)

diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 4a58857..d708c90 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -154,11 +154,7 @@ struct rte_ring {
 
 	/** Ring consumer status. */
 	struct rte_ring_ht_ptr cons __rte_aligned(RTE_CACHE_LINE_SIZE * 2);
-
-	void *ring[] __rte_cache_aligned;   /**< Memory space of ring starts here.
-	                                     * not volatile so need to be careful
-	                                     * about compiler re-ordering */
-};
+} __rte_cache_aligned;
 
 #define RING_F_SP_ENQ 0x0001 /**< The default enqueue is "single-producer". */
 #define RING_F_SC_DEQ 0x0002 /**< The default dequeue is "single-consumer". */
@@ -286,54 +282,62 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
 /* the actual enqueue of pointers on the ring.
  * Placed here since identical code needed in both
  * single and multi producer enqueue functions */
-#define ENQUEUE_PTRS() do { \
+#define ENQUEUE_PTRS(r, obj_table, n, prod_head, obj_type) do { \
 	unsigned int i; \
-	const uint32_t size = r->size; \
-	uint32_t idx = prod_head & r->mask; \
+	const uint32_t size = (r)->size; \
+	uint32_t idx = prod_head & (r)->mask; \
+	obj_type *ring = (void *)&(r)[1]; \
 	if (likely(idx + n < size)) { \
 		for (i = 0; i < (n & ((~(unsigned)0x3))); i+=4, idx+=4) { \
-			r->ring[idx] = obj_table[i]; \
-			r->ring[idx+1] = obj_table[i+1]; \
-			r->ring[idx+2] = obj_table[i+2]; \
-			r->ring[idx+3] = obj_table[i+3]; \
+			ring[idx] = obj_table[i]; \
+			ring[idx+1] = obj_table[i+1]; \
+			ring[idx+2] = obj_table[i+2]; \
+			ring[idx+3] = obj_table[i+3]; \
 		} \
 		switch (n & 0x3) { \
-			case 3: r->ring[idx++] = obj_table[i++]; \
-			case 2: r->ring[idx++] = obj_table[i++]; \
-			case 1: r->ring[idx++] = obj_table[i++]; \
+		case 3: \
+			ring[idx++] = obj_table[i++]; /* fallthrough */ \
+		case 2: \
+			ring[idx++] = obj_table[i++]; /* fallthrough */ \
+		case 1: \
+			ring[idx++] = obj_table[i++]; \
 		} \
 	} else { \
 		for (i = 0; idx < size; i++, idx++)\
-			r->ring[idx] = obj_table[i]; \
+			ring[idx] = obj_table[i]; \
 		for (idx = 0; i < n; i++, idx++) \
-			r->ring[idx] = obj_table[i]; \
+			ring[idx] = obj_table[i]; \
 	} \
-} while(0)
+} while (0)
 
 /* the actual copy of pointers on the ring to obj_table.
  * Placed here since identical code needed in both
  * single and multi consumer dequeue functions */
-#define DEQUEUE_PTRS() do { \
+#define DEQUEUE_PTRS(r, obj_table, n, cons_head, obj_type) do { \
 	unsigned int i; \
-	uint32_t idx = cons_head & r->mask; \
-	const uint32_t size = r->size; \
+	uint32_t idx = cons_head & (r)->mask; \
+	const uint32_t size = (r)->size; \
+	obj_type *ring = (void *)&(r)[1]; \
 	if (likely(idx + n < size)) { \
 		for (i = 0; i < (n & (~(unsigned)0x3)); i+=4, idx+=4) {\
-			obj_table[i] = r->ring[idx]; \
-			obj_table[i+1] = r->ring[idx+1]; \
-			obj_table[i+2] = r->ring[idx+2]; \
-			obj_table[i+3] = r->ring[idx+3]; \
+			obj_table[i] = ring[idx]; \
+			obj_table[i+1] = ring[idx+1]; \
+			obj_table[i+2] = ring[idx+2]; \
+			obj_table[i+3] = ring[idx+3]; \
 		} \
 		switch (n & 0x3) { \
-			case 3: obj_table[i++] = r->ring[idx++]; \
-			case 2: obj_table[i++] = r->ring[idx++]; \
-			case 1: obj_table[i++] = r->ring[idx++]; \
+		case 3: \
+			obj_table[i++] = ring[idx++]; /* fallthrough */ \
+		case 2: \
+			obj_table[i++] = ring[idx++]; /* fallthrough */ \
+		case 1: \
+			obj_table[i++] = ring[idx++]; \
 		} \
 	} else { \
 		for (i = 0; idx < size; i++, idx++) \
-			obj_table[i] = r->ring[idx]; \
+			obj_table[i] = ring[idx]; \
 		for (idx = 0; i < n; i++, idx++) \
-			obj_table[i] = r->ring[idx]; \
+			obj_table[i] = ring[idx]; \
 	} \
 } while (0)
 
@@ -444,7 +448,7 @@ __rte_ring_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	if (n == 0)
 		goto end;
 
-	ENQUEUE_PTRS();
+	ENQUEUE_PTRS(r, obj_table, n, prod_head, void *);
 	rte_smp_wmb();
 
 	update_tail(&r->prod, prod_head, prod_next);
@@ -547,7 +551,7 @@ __rte_ring_do_dequeue(struct rte_ring *r, void **obj_table,
 	if (n == 0)
 		goto end;
 
-	DEQUEUE_PTRS();
+	DEQUEUE_PTRS(r, obj_table, n, cons_head, void *);
 	rte_smp_rmb();
 
 	update_tail(&r->cons, cons_head, cons_next);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 18/19] ring: add object size parameter to memory size calculation
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (19 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 17/19] ring: allow macros to work with any type of object Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  2017-02-07 14:12 ` [PATCH RFCv3 19/19] ring: add event ring implementation Bruce Richardson
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

Add in an extra parameter for the ring element size to the function
which calculates the amount of memory needed for a ring.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_ring/rte_ring.c | 6 +++---
 lib/librte_ring/rte_ring.h | 5 ++++-
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index 13887ab..eb2a96d 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -101,7 +101,7 @@ EAL_REGISTER_TAILQ(rte_ring_tailq)
 
 /* return the size of memory occupied by a ring */
 ssize_t
-rte_ring_get_memsize(unsigned count)
+rte_ring_get_memsize(unsigned int count, size_t obj_size)
 {
 	ssize_t sz;
 
@@ -113,7 +113,7 @@ rte_ring_get_memsize(unsigned count)
 		return -EINVAL;
 	}
 
-	sz = sizeof(struct rte_ring) + count * sizeof(void *);
+	sz = sizeof(struct rte_ring) + (count * obj_size);
 	sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
 	return sz;
 }
@@ -164,7 +164,7 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
 
 	ring_list = RTE_TAILQ_CAST(rte_ring_tailq.head, rte_ring_list);
 
-	ring_size = rte_ring_get_memsize(count);
+	ring_size = rte_ring_get_memsize(count, sizeof(void *));
 	if (ring_size < 0) {
 		rte_errno = ring_size;
 		return NULL;
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index d708c90..9d5eade 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -177,11 +177,14 @@ struct rte_ring {
  *
  * @param count
  *   The number of elements in the ring (must be a power of 2).
+ * @param obj_size
+ *   The size of the objects to be stored in the ring, normally for
+ *   rte_rings this should be sizeof(void *)
  * @return
  *   - The memory size needed for the ring on success.
  *   - -EINVAL if count is not a power of 2.
  */
-ssize_t rte_ring_get_memsize(unsigned count);
+ssize_t rte_ring_get_memsize(unsigned int count, size_t obj_size);
 
 /**
  * Initialize a ring structure.
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFCv3 19/19] ring: add event ring implementation
  2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
                   ` (20 preceding siblings ...)
  2017-02-07 14:12 ` [PATCH RFCv3 18/19] ring: add object size parameter to memory size calculation Bruce Richardson
@ 2017-02-07 14:12 ` Bruce Richardson
  21 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-07 14:12 UTC (permalink / raw)
  To: olivier.matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev,
	Bruce Richardson

DEMO ONLY: add an event ring implementation to demonstrate how rings
for new types can be efficiently added by reusing existing rte ring
functions.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/Makefile                |   1 +
 app/test/test_event_ring.c       |  85 +++++++
 lib/librte_ring/Makefile         |   2 +
 lib/librte_ring/rte_event_ring.c | 220 +++++++++++++++++
 lib/librte_ring/rte_event_ring.h | 507 +++++++++++++++++++++++++++++++++++++++
 5 files changed, 815 insertions(+)
 create mode 100644 app/test/test_event_ring.c
 create mode 100644 lib/librte_ring/rte_event_ring.c
 create mode 100644 lib/librte_ring/rte_event_ring.h

diff --git a/app/test/Makefile b/app/test/Makefile
index 9de301f..c0ab2ba 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -96,6 +96,7 @@ SRCS-y += test_memory.c
 SRCS-y += test_memzone.c
 
 SRCS-y += test_ring.c
+SRCS-y += test_event_ring.c
 SRCS-y += test_ring_perf.c
 SRCS-y += test_pmd_perf.c
 
diff --git a/app/test/test_event_ring.c b/app/test/test_event_ring.c
new file mode 100644
index 0000000..5ac4bac
--- /dev/null
+++ b/app/test/test_event_ring.c
@@ -0,0 +1,85 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_random.h>
+#include <rte_event_ring.h>
+#include "test.h"
+
+#define BURST_SIZE 24 /* not a power of two so we can test wrap around better */
+#define RING_SIZE 128
+#define ENQ_ITERATIONS 48
+
+#define ERR_OUT() do { \
+	printf("Error %s:%d\n", __FILE__, __LINE__); \
+	return -1; \
+} while (0)
+
+static int
+test_event_ring(void)
+{
+	struct rte_event in_events[BURST_SIZE];
+	struct rte_event out_events[BURST_SIZE];
+	unsigned int i;
+
+	struct rte_event_ring *ering = rte_event_ring_create("TEST_RING",
+			RING_SIZE, rte_socket_id(),
+			RING_F_SP_ENQ|RING_F_SC_DEQ);
+	if (ering == NULL)
+		ERR_OUT();
+
+	for (i = 0; i < BURST_SIZE; i++)
+		in_events[i].event_metadata = rte_rand();
+
+	for (i = 0; i < ENQ_ITERATIONS; i++)
+		if (rte_event_ring_enqueue_bulk(ering, in_events,
+				RTE_DIM(in_events), NULL) != 0) {
+			unsigned j;
+
+			if (rte_event_ring_dequeue_burst(ering, out_events,
+					RTE_DIM(out_events), NULL)
+						!= RTE_DIM(out_events))
+				ERR_OUT();
+			for (j = 0; j < RTE_DIM(out_events); j++)
+				if (out_events[j].event_metadata !=
+						in_events[j].event_metadata)
+					ERR_OUT();
+			/* retry, now that we've made space */
+			if (rte_event_ring_enqueue_bulk(ering, in_events,
+					RTE_DIM(in_events), NULL) != 0)
+				ERR_OUT();
+		}
+
+	return 0;
+}
+
+REGISTER_TEST_COMMAND(event_ring_autotest, test_event_ring);
diff --git a/lib/librte_ring/Makefile b/lib/librte_ring/Makefile
index 4b1112e..309960f 100644
--- a/lib/librte_ring/Makefile
+++ b/lib/librte_ring/Makefile
@@ -42,9 +42,11 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_RING) := rte_ring.c
+SRCS-$(CONFIG_RTE_LIBRTE_RING) += rte_event_ring.c
 
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include += rte_event_ring.h
 
 DEPDIRS-$(CONFIG_RTE_LIBRTE_RING) += lib/librte_eal
 
diff --git a/lib/librte_ring/rte_event_ring.c b/lib/librte_ring/rte_event_ring.c
new file mode 100644
index 0000000..f6875b0
--- /dev/null
+++ b/lib/librte_ring/rte_event_ring.c
@@ -0,0 +1,220 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_tailq.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_rwlock.h>
+
+#include "rte_event_ring.h"
+
+#define RTE_TAILQ_EVENT_RING_NAME "RTE_EVENT_RING"
+
+TAILQ_HEAD(rte_event_ring_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_event_ring_tailq = {
+	.name = RTE_TAILQ_EVENT_RING_NAME,
+};
+EAL_REGISTER_TAILQ(rte_event_ring_tailq)
+
+int
+rte_event_ring_init(struct rte_event_ring *r, const char *name,
+		unsigned int count, unsigned int flags)
+{
+	return rte_ring_init(&r->hdr, name, count, flags);
+}
+
+
+struct rte_event_ring *
+rte_event_ring_create(const char *name, unsigned count, int socket_id,
+		unsigned int flags)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	struct rte_event_ring *r;
+	struct rte_tailq_entry *te;
+	const struct rte_memzone *mz;
+	ssize_t ring_size;
+	int mz_flags = 0;
+	struct rte_event_ring_list *ring_list = NULL;
+	int ret;
+
+	ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head,
+			rte_event_ring_list);
+
+	ring_size = rte_ring_get_memsize(count, sizeof(struct rte_event));
+	if (ring_size < 0) {
+		rte_errno = ring_size;
+		return NULL;
+	}
+
+	ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+		RTE_RING_MZ_PREFIX, name);
+	if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+		rte_errno = ENAMETOOLONG;
+		return NULL;
+	}
+
+	te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+	/* reserve a memory zone for this ring. If we can't get rte_config or
+	 * we are secondary process, the memzone_reserve function will set
+	 * rte_errno for us appropriately - hence no check in this this function */
+	mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
+	if (mz != NULL) {
+		r = mz->addr;
+		/* no need to check return value here, we already checked the
+		 * arguments above */
+		rte_event_ring_init(r, name, count, flags);
+
+		te->data = (void *) r;
+		r->hdr.memzone = mz;
+
+		TAILQ_INSERT_TAIL(ring_list, te, next);
+	} else {
+		r = NULL;
+		RTE_LOG(ERR, RING, "Cannot reserve memory\n");
+		rte_free(te);
+	}
+	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+	return r;
+}
+
+void
+rte_event_ring_free(struct rte_event_ring *r)
+{
+	struct rte_event_ring_list *ring_list = NULL;
+	struct rte_tailq_entry *te;
+
+	if (r == NULL)
+		return;
+
+	/*
+	 * Ring was not created with rte_ring_create,
+	 * therefore, there is no memzone to free.
+	 */
+	if (r->hdr.memzone == NULL) {
+		RTE_LOG(ERR, RING,
+			"Cannot free ring (not created with rte_ring_create()");
+		return;
+	}
+
+	if (rte_memzone_free(r->hdr.memzone) != 0) {
+		RTE_LOG(ERR, RING, "Cannot free memory\n");
+		return;
+	}
+
+	ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head,
+			rte_event_ring_list);
+	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+	/* find out tailq entry */
+	TAILQ_FOREACH(te, ring_list, next) {
+		if (te->data == (void *) r)
+			break;
+	}
+
+	if (te == NULL) {
+		rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+		return;
+	}
+
+	TAILQ_REMOVE(ring_list, te, next);
+
+	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+	rte_free(te);
+}
+
+void
+rte_event_ring_dump(FILE *f, const struct rte_event_ring *r)
+{
+	return rte_ring_dump(f, &r->hdr);
+}
+
+void
+rte_event_ring_list_dump(FILE *f)
+{
+	const struct rte_tailq_entry *te;
+	struct rte_event_ring_list *ring_list;
+
+	ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head,
+			rte_event_ring_list);
+
+	rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+	TAILQ_FOREACH(te, ring_list, next) {
+		rte_event_ring_dump(f, te->data);
+	}
+
+	rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+}
+
+struct rte_event_ring *
+rte_event_ring_lookup(const char *name)
+{
+	struct rte_tailq_entry *te;
+	struct rte_event_ring *r = NULL;
+	struct rte_event_ring_list *ring_list;
+
+	ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head,
+			rte_event_ring_list);
+
+	rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+	TAILQ_FOREACH(te, ring_list, next) {
+		r = te->data;
+		if (strncmp(name, r->hdr.name, RTE_RING_NAMESIZE) == 0)
+			break;
+	}
+
+	rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+	if (te == NULL) {
+		rte_errno = ENOENT;
+		return NULL;
+	}
+
+	return r;
+}
diff --git a/lib/librte_ring/rte_event_ring.h b/lib/librte_ring/rte_event_ring.h
new file mode 100644
index 0000000..f34b321
--- /dev/null
+++ b/lib/librte_ring/rte_event_ring.h
@@ -0,0 +1,507 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_EVENT_RING_H_
+#define _RTE_EVENT_RING_H_
+
+/**
+ * @file
+ * RTE Event Ring
+ *
+ * The Ring Manager is a fixed-size event queue, implemented as a table of
+ * events. Head and tail pointers are modified atomically, allowing
+ * concurrent access to it. It has the following features:
+ *
+ * - FIFO (First In First Out)
+ * - Maximum size is fixed; the pointers are stored in a table.
+ * - Lockless implementation.
+ * - Multi- or single-consumer dequeue.
+ * - Multi- or single-producer enqueue.
+ * - Bulk dequeue.
+ * - Bulk enqueue.
+ *
+ * Note: the ring implementation is not preemptable. A lcore must not
+ * be interrupted by another task that uses the same ring.
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "rte_ring.h"
+
+struct rte_event {
+	uint64_t event_metadata;
+	struct rte_mbuf *mbuf;
+};
+
+struct rte_event_ring {
+	struct rte_ring hdr;
+};
+
+/**
+ * Initialize a ring structure.
+ *
+ * Initialize a ring structure in memory pointed by "r". The size of the
+ * memory area must be large enough to store the ring structure and the
+ * object table. It is advised to use rte_event_ring_get_memsize() to get the
+ * appropriate size.
+ *
+ * The ring size is set to *count*, which must be a power of two. Water
+ * marking is disabled by default. The real usable ring size is
+ * *count-1* instead of *count* to differentiate a free ring from an
+ * empty ring.
+ *
+ * The ring is not added in RTE_TAILQ_RING global list. Indeed, the
+ * memory given by the caller may not be shareable among dpdk
+ * processes.
+ *
+ * @param r
+ *   The pointer to the ring structure followed by the objects table.
+ * @param name
+ *   The name of the ring.
+ * @param count
+ *   The number of elements in the ring (must be a power of 2).
+ * @param flags
+ *   An OR of the following:
+ *    - RING_F_SP_ENQ: If this flag is set, the default behavior when
+ *      using ``rte_event_ring_enqueue()`` or ``rte_event_ring_enqueue_bulk()``
+ *      is "single-producer". Otherwise, it is "multi-producers".
+ *    - RING_F_SC_DEQ: If this flag is set, the default behavior when
+ *      using ``rte_event_ring_dequeue()`` or ``rte_event_ring_dequeue_bulk()``
+ *      is "single-consumer". Otherwise, it is "multi-consumers".
+ * @return
+ *   0 on success, or a negative value on error.
+ */
+int rte_event_ring_init(struct rte_event_ring *r, const char *name,
+		unsigned int count, unsigned int flags);
+
+/**
+ * Create a new ring named *name* in memory.
+ *
+ * This function uses ``memzone_reserve()`` to allocate memory. Then it
+ * calls rte_event_ring_init() to initialize an empty ring.
+ *
+ * The new ring size is set to *count*, which must be a power of
+ * two. Water marking is disabled by default. The real usable ring size
+ * is *count-1* instead of *count* to differentiate a free ring from an
+ * empty ring.
+ *
+ * The ring is added in RTE_TAILQ_RING list.
+ *
+ * @param name
+ *   The name of the ring.
+ * @param count
+ *   The size of the ring (must be a power of 2).
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   An OR of the following:
+ *    - RING_F_SP_ENQ: If this flag is set, the default behavior when
+ *      using ``rte_event_ring_enqueue()`` or ``rte_event_ring_enqueue_bulk()``
+ *      is "single-producer". Otherwise, it is "multi-producers".
+ *    - RING_F_SC_DEQ: If this flag is set, the default behavior when
+ *      using ``rte_event_ring_dequeue()`` or ``rte_event_ring_dequeue_bulk()``
+ *      is "single-consumer". Otherwise, it is "multi-consumers".
+ * @return
+ *   On success, the pointer to the new allocated ring. NULL on error with
+ *    rte_errno set appropriately. Possible errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - EINVAL - count provided is not a power of 2
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_event_ring *rte_event_ring_create(const char *name, unsigned count,
+				 int socket_id, unsigned flags);
+/**
+ * De-allocate all memory used by the ring.
+ *
+ * @param r
+ *   Ring to free
+ */
+void rte_event_ring_free(struct rte_event_ring *r);
+
+/**
+ * Dump the status of the ring to a file.
+ *
+ * @param f
+ *   A pointer to a file for output
+ * @param r
+ *   A pointer to the ring structure.
+ */
+void rte_event_ring_dump(FILE *f, const struct rte_event_ring *r);
+
+/**
+ * Test if a ring is full.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @return
+ *   - 1: The ring is full.
+ *   - 0: The ring is not full.
+ */
+static inline int
+rte_event_ring_full(const struct rte_event_ring *r)
+{
+	return rte_ring_full(&r->hdr);
+}
+
+/**
+ * Test if a ring is empty.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @return
+ *   - 1: The ring is empty.
+ *   - 0: The ring is not empty.
+ */
+static inline int
+rte_event_ring_empty(const struct rte_event_ring *r)
+{
+	return rte_ring_empty(&r->hdr);
+}
+
+/**
+ * Return the number of entries in a ring.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @return
+ *   The number of entries in the ring.
+ */
+static inline unsigned
+rte_event_ring_count(const struct rte_event_ring *r)
+{
+	return rte_ring_count(&r->hdr);
+}
+
+/**
+ * Return the number of free entries in a ring.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @return
+ *   The number of free entries in the ring.
+ */
+static inline unsigned
+rte_event_ring_free_count(const struct rte_event_ring *r)
+{
+	return rte_ring_free_count(&r->hdr);
+}
+
+/**
+ * Dump the status of all rings on the console
+ *
+ * @param f
+ *   A pointer to a file for output
+ */
+void rte_event_ring_list_dump(FILE *f);
+
+/**
+ * Search a ring from its name
+ *
+ * @param name
+ *   The name of the ring.
+ * @return
+ *   The pointer to the ring matching the name, or NULL if not found,
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - ENOENT - required entry not available to return.
+ */
+struct rte_event_ring *rte_event_ring_lookup(const char *name);
+
+
+static inline __attribute__((always_inline)) unsigned int
+__rte_event_ring_do_enqueue(struct rte_event_ring *r,
+		const struct rte_event *obj_table, unsigned int n,
+		enum rte_ring_queue_behavior behavior, int is_sp,
+		unsigned int *free_space)
+{
+	uint32_t prod_head, prod_next;
+	uint32_t free_entries;
+
+	n = __rte_ring_move_prod_head(&r->hdr, is_sp, n, behavior,
+			&prod_head, &prod_next, &free_entries);
+	if (n == 0)
+		goto end;
+
+	ENQUEUE_PTRS(&r->hdr, obj_table, n, prod_head, struct rte_event);
+	rte_smp_wmb();
+
+	update_tail(&r->hdr.prod, prod_head, prod_next);
+end:
+	if (free_space != NULL)
+		*free_space = free_entries - n;
+	return n;
+}
+
+static inline __attribute__((always_inline)) unsigned int
+__rte_event_ring_do_dequeue(struct rte_event_ring *r,
+		struct rte_event *obj_table, unsigned int n,
+		enum rte_ring_queue_behavior behavior,
+		int is_sc, unsigned int *available)
+{
+	uint32_t cons_head, cons_next;
+	uint32_t entries;
+
+	n = __rte_ring_move_cons_head(&r->hdr, is_sc, n, behavior,
+			&cons_head, &cons_next, &entries);
+	if (n == 0)
+		goto end;
+
+	DEQUEUE_PTRS(&r->hdr, obj_table, n, cons_head, struct rte_event);
+	rte_smp_rmb();
+
+	update_tail(&r->hdr.cons, cons_head, cons_next);
+
+end:
+	if (available != NULL)
+		*available = entries - n;
+	return n;
+}
+
+/**
+ * Enqueue several objects on the ring (multi-producers safe).
+ *
+ * This function uses a "compare and set" instruction to move the
+ * producer index atomically.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @param free_space
+ *   Returns the amount of free space in the ring after the enqueue call
+ * @return
+ *   - n: Actual number of objects enqueued.
+ */
+static inline unsigned __attribute__((always_inline))
+rte_event_ring_mp_enqueue_burst(struct rte_event_ring *r,
+		const struct rte_event * const obj_table, unsigned int n,
+		unsigned int *free_space)
+{
+	return __rte_event_ring_do_enqueue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, __IS_MP, free_space);
+}
+
+/**
+ * Enqueue several objects on a ring (NOT multi-producers safe).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @param free_space
+ *   Returns the amount of free space in the ring after the enqueue call
+ * @return
+ *   - n: Actual number of objects enqueued.
+ */
+static inline unsigned __attribute__((always_inline))
+rte_event_ring_sp_enqueue_burst(struct rte_event_ring *r,
+		const struct rte_event * const obj_table, unsigned int n,
+		unsigned int *free_space)
+{
+	return __rte_event_ring_do_enqueue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, __IS_SP, free_space);
+}
+
+/**
+ * Enqueue several objects on a ring.
+ *
+ * This function calls the multi-producer or the single-producer
+ * version depending on the default behavior that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @return
+ *   - n: Actual number of objects enqueued.
+ */
+static inline unsigned __attribute__((always_inline))
+rte_event_ring_enqueue_burst(struct rte_event_ring *r,
+		const struct rte_event * const obj_table, unsigned int n,
+		unsigned int *free_space)
+{
+	return __rte_event_ring_do_enqueue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, r->hdr.prod.sp_enqueue,
+			free_space);
+}
+
+/**
+ * Enqueue all objects on a ring, or enqueue none.
+ *
+ * This function calls the multi-producer or the single-producer
+ * version depending on the default behavior that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @return
+ *   - n: All objects enqueued.
+ *   - 0: No objects enqueued
+ */
+static inline unsigned __attribute__((always_inline))
+rte_event_ring_enqueue_bulk(struct rte_event_ring *r,
+		const struct rte_event * const obj_table, unsigned int n,
+		unsigned int *free_space)
+{
+	return __rte_event_ring_do_enqueue(r, obj_table, n,
+			RTE_RING_QUEUE_FIXED, r->hdr.prod.sp_enqueue,
+			free_space);
+}
+
+/**
+ * Dequeue several objects from a ring (multi-consumers safe). When the request
+ * objects are more than the available objects, only dequeue the actual number
+ * of objects
+ *
+ * This function uses a "compare and set" instruction to move the
+ * consumer index atomically.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @return
+ *   - n: Actual number of objects dequeued, 0 if ring is empty
+ */
+static inline unsigned __attribute__((always_inline))
+rte_event_ring_mc_dequeue_burst(struct rte_event_ring *r,
+		struct rte_event * const obj_table, unsigned int n,
+		unsigned int *available)
+{
+	return __rte_event_ring_do_dequeue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, __IS_MC, available);
+}
+
+/**
+ * Dequeue several objects from a ring (NOT multi-consumers safe).When the
+ * request objects are more than the available objects, only dequeue the
+ * actual number of objects
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @return
+ *   - n: Actual number of objects dequeued, 0 if ring is empty
+ */
+static inline unsigned __attribute__((always_inline))
+rte_event_ring_sc_dequeue_burst(struct rte_event_ring *r,
+		struct rte_event * const obj_table, unsigned int n,
+		unsigned int *available)
+{
+	return __rte_event_ring_do_dequeue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, __IS_SC, available);
+}
+
+/**
+ * Dequeue multiple objects from a ring up to a maximum number.
+ *
+ * This function calls the multi-consumers or the single-consumer
+ * version, depending on the default behaviour that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @return
+ *   - Number of objects dequeued
+ */
+static inline unsigned __attribute__((always_inline))
+rte_event_ring_dequeue_burst(struct rte_event_ring *r,
+		struct rte_event * const obj_table, unsigned int n,
+		unsigned int *available)
+{
+	return __rte_event_ring_do_dequeue(r, obj_table, n,
+			RTE_RING_QUEUE_VARIABLE, r->hdr.cons.sc_dequeue,
+			available);
+}
+
+/**
+ * Dequeue all requested objects from a ring, or dequeue none .
+ *
+ * This function calls the multi-consumers or the single-consumer
+ * version, depending on the default behaviour that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @return
+ *   - n: all objects dequeued
+ *   - 0: nothing dequeued
+ */
+static inline unsigned __attribute__((always_inline))
+rte_event_ring_dequeue_bulk(struct rte_event_ring *r,
+		struct rte_event * const obj_table, unsigned int n,
+		unsigned int *available)
+{
+	return __rte_event_ring_do_dequeue(r, obj_table, n,
+			RTE_RING_QUEUE_FIXED, r->hdr.cons.sc_dequeue,
+			available);
+}
+
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_EVENT_RING_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH RFCv3 00/19] ring cleanup and generalization
  2017-02-07 14:12 ` [PATCH RFCv3 00/19] ring cleanup and generalization Bruce Richardson
@ 2017-02-14  8:32   ` Olivier Matz
  2017-02-14  9:39     ` Bruce Richardson
  0 siblings, 1 reply; 37+ messages in thread
From: Olivier Matz @ 2017-02-14  8:32 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev

Hi Bruce,

On Tue,  7 Feb 2017 14:12:38 +0000, Bruce Richardson
<bruce.richardson@intel.com> wrote:
> This patchset make a set of, sometimes non-backward compatible,
> cleanup changes to the rte_ring code in order to improve it. The
> resulting code is shorter*, since the existing functions are
> restructured to reduce code duplication, as well as being more
> consistent in behaviour. The specific changes made are explained in
> each patch which makes that change.
> 
> Key incompatibilities:
> * The biggest, and probably most controversial change is that to the
>   enqueue and dequeue APIs. The enqueue/deq burst and bulk functions
> have their function prototypes changed so that they all return an
> additional parameter, indicating the size of next call which is
> guaranteed to succeed. In case on enq, this is the number of
> available slots on the ring, and in case of deq, it is the number of
> objects which can be pulled. As well as this, the return value from
> the bulk functions have been changed to make them compatible with the
> burst functions. In all cases, the functions to enq/deq a set of objs
> now return the number of objects processed, 0 or N, in the case of
> bulk functions, 0, N or any value in between in the case of the burst
> ones. [Due to the extra parameter, the compiler will flag all
> instances of the function to allow the user to also change the return
> value logic at the same time]
> * The parameters to the single object enq/deq functions have not been 
>   changed. Because of that, the return value is also unmodified - as
> the compiler cannot automatically flag this to the user.
> 
> Potential further cleanups:
> * To a certain extent the rte_ring structure has gone from being a
> whole ring structure, including a "ring" element itself, to just
> being a header which can be reused, along with the head/tail update
> functions to create new rings. For now, the enqueue code works by
> assuming that the ring data goes immediately after the header, but
> that can be changed to allow specialised ring implementations to put
> additional metadata of their own after the ring header. I didn't see
> this as being needed right now, but it may be worth considering for a
> V1 patchset.
> * There are 9 enqueue functions and 9 dequeue functions in
> rte_ring.h. I suspect not all of those are used, so personally I
> would consider dropping the functions to enqueue/dequeue a single
> value using single or multi semantics, i.e. drop 
>     rte_ring_sp_enqueue
>     rte_ring_mp_enqueue
>     rte_ring_sc_dequeue
>     rte_ring_mc_dequeue
>   That would still leave a single enqueue and dequeue function for
> working with a single object at a time.
> * It should be possible to merge the head update code for enqueue and
>   dequeue into a single function. The key difference between the two
> is the calculation of how far the index can be moved. I felt that the
>   functions for moving the head index are sufficiently complicated
> with many parameters to them already, that trying to merge in more
> code would impede readability. However, if so desired this change can
> be made at a later stage without affecting ABI or API.
> 
> PERFORMANCE:
> I've run performance autotests on a couple of (Intel) platforms.
> Looking particularly at the core-2-core results, which I expect are
> the main ones of interest, the performance after this patchset is a
> few cycles per packet faster in my testing. I'm hoping it should be
> at least neutral perf-wise.
> 
> REQUEST FOR FEEDBACK:
> * Are all of these changes worth making?

I've quickly browsed all the patches. I think yes, we should do it: it
brings a good cleanup, removing features we don't need, restructuring
the code, and also adding the feature you need :)


> * Should they be made in existing ring code, or do we look to provide
> a new fifo library to completely replace the ring one?

I think it's ok to have it in the existing code. Breaking the ABI
is never suitable, but I think having 2 libs would be even more
confusing.


> * How does the implementation of new ring types using this code
> compare vs that of the previous RFCs?

I prefer this version, especially compared to the first RFC.


Thanks for this big rework. I'll dive into the patches a do a more
exhaustive review soon.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFCv3 00/19] ring cleanup and generalization
  2017-02-14  8:32   ` Olivier Matz
@ 2017-02-14  9:39     ` Bruce Richardson
  0 siblings, 0 replies; 37+ messages in thread
From: Bruce Richardson @ 2017-02-14  9:39 UTC (permalink / raw)
  To: Olivier Matz
  Cc: thomas.monjalon, keith.wiles, konstantin.ananyev, stephen, dev

On Tue, Feb 14, 2017 at 09:32:20AM +0100, Olivier Matz wrote:
> Hi Bruce,
> 
> On Tue,  7 Feb 2017 14:12:38 +0000, Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> > This patchset make a set of, sometimes non-backward compatible,
> > cleanup changes to the rte_ring code in order to improve it. The
> > resulting code is shorter*, since the existing functions are
> > restructured to reduce code duplication, as well as being more
> > consistent in behaviour. The specific changes made are explained in
> > each patch which makes that change.
> > 
> > Key incompatibilities:
> > * The biggest, and probably most controversial change is that to the
> >   enqueue and dequeue APIs. The enqueue/deq burst and bulk functions
> > have their function prototypes changed so that they all return an
> > additional parameter, indicating the size of next call which is
> > guaranteed to succeed. In case on enq, this is the number of
> > available slots on the ring, and in case of deq, it is the number of
> > objects which can be pulled. As well as this, the return value from
> > the bulk functions have been changed to make them compatible with the
> > burst functions. In all cases, the functions to enq/deq a set of objs
> > now return the number of objects processed, 0 or N, in the case of
> > bulk functions, 0, N or any value in between in the case of the burst
> > ones. [Due to the extra parameter, the compiler will flag all
> > instances of the function to allow the user to also change the return
> > value logic at the same time]
> > * The parameters to the single object enq/deq functions have not been 
> >   changed. Because of that, the return value is also unmodified - as
> > the compiler cannot automatically flag this to the user.
> > 
> > Potential further cleanups:
> > * To a certain extent the rte_ring structure has gone from being a
> > whole ring structure, including a "ring" element itself, to just
> > being a header which can be reused, along with the head/tail update
> > functions to create new rings. For now, the enqueue code works by
> > assuming that the ring data goes immediately after the header, but
> > that can be changed to allow specialised ring implementations to put
> > additional metadata of their own after the ring header. I didn't see
> > this as being needed right now, but it may be worth considering for a
> > V1 patchset.
> > * There are 9 enqueue functions and 9 dequeue functions in
> > rte_ring.h. I suspect not all of those are used, so personally I
> > would consider dropping the functions to enqueue/dequeue a single
> > value using single or multi semantics, i.e. drop 
> >     rte_ring_sp_enqueue
> >     rte_ring_mp_enqueue
> >     rte_ring_sc_dequeue
> >     rte_ring_mc_dequeue
> >   That would still leave a single enqueue and dequeue function for
> > working with a single object at a time.
> > * It should be possible to merge the head update code for enqueue and
> >   dequeue into a single function. The key difference between the two
> > is the calculation of how far the index can be moved. I felt that the
> >   functions for moving the head index are sufficiently complicated
> > with many parameters to them already, that trying to merge in more
> > code would impede readability. However, if so desired this change can
> > be made at a later stage without affecting ABI or API.
> > 
> > PERFORMANCE:
> > I've run performance autotests on a couple of (Intel) platforms.
> > Looking particularly at the core-2-core results, which I expect are
> > the main ones of interest, the performance after this patchset is a
> > few cycles per packet faster in my testing. I'm hoping it should be
> > at least neutral perf-wise.
> > 
> > REQUEST FOR FEEDBACK:
> > * Are all of these changes worth making?
> 
> I've quickly browsed all the patches. I think yes, we should do it: it
> brings a good cleanup, removing features we don't need, restructuring
> the code, and also adding the feature you need :)
> 
> 
> > * Should they be made in existing ring code, or do we look to provide
> > a new fifo library to completely replace the ring one?
> 
> I think it's ok to have it in the existing code. Breaking the ABI
> is never suitable, but I think having 2 libs would be even more
> confusing.
> 
> 
> > * How does the implementation of new ring types using this code
> > compare vs that of the previous RFCs?
> 
> I prefer this version, especially compared to the first RFC.
> 
> 
> Thanks for this big rework. I'll dive into the patches a do a more
> exhaustive review soon.
> 
Great, thanks. I'm aware of a few things that already need to be cleaned
up for V1 e.g. comments are not always correctly updated on functions.

/Bruce

^ permalink raw reply	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2017-02-14  9:39 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-01-25 12:14 rte_ring features in use (or not) Bruce Richardson
2017-01-25 12:16 ` Bruce Richardson
2017-01-25 13:20 ` Olivier MATZ
2017-01-25 13:54   ` Bruce Richardson
2017-01-25 14:48     ` Bruce Richardson
2017-01-25 15:59       ` Wiles, Keith
2017-01-25 16:57         ` Bruce Richardson
2017-01-25 17:29           ` Ananyev, Konstantin
2017-01-31 10:53             ` Olivier Matz
2017-01-31 11:41               ` Bruce Richardson
2017-01-31 12:10                 ` Bruce Richardson
2017-01-31 13:27                   ` Olivier Matz
2017-01-31 13:46                     ` Bruce Richardson
2017-01-25 22:27           ` Wiles, Keith
2017-01-25 16:39   ` Stephen Hemminger
2017-02-07 14:12 ` [PATCH RFCv3 00/19] ring cleanup and generalization Bruce Richardson
2017-02-14  8:32   ` Olivier Matz
2017-02-14  9:39     ` Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 01/19] app/pdump: fix duplicate macro definition Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 02/19] ring: remove split cacheline build setting Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 03/19] ring: create common structure for prod and cons metadata Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 04/19] ring: add a function to return the ring size Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 05/19] crypto/null: use ring size function Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 06/19] ring: eliminate duplication of size and mask fields Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 07/19] ring: remove debug setting Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 08/19] ring: remove the yield when waiting for tail update Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 09/19] ring: remove watermark support Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 10/19] ring: make bulk and burst fn return vals consistent Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 11/19] ring: allow enq fns to return free space value Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 12/19] examples/quota_watermark: use ring space for watermarks Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 13/19] ring: allow dequeue fns to return remaining entry count Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 14/19] ring: reduce scope of local variables Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 15/19] ring: separate out head index manipulation for enq/deq Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 16/19] ring: create common function for updating tail idx Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 17/19] ring: allow macros to work with any type of object Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 18/19] ring: add object size parameter to memory size calculation Bruce Richardson
2017-02-07 14:12 ` [PATCH RFCv3 19/19] ring: add event ring implementation Bruce Richardson

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.