* [PATCH] mempool: micro-optimize put function
@ 2022-11-16 10:18 Morten Brørup
  2022-11-16 11:04 ` Andrew Rybchenko
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Morten Brørup @ 2022-11-16 10:18 UTC (permalink / raw)
  To: olivier.matz, andrew.rybchenko, dev
  Cc: honnappa.nagarahalli, bruce.richardson, konstantin.ananyev,
	Morten Brørup

Micro-optimization:
Reduced the most likely code path in the generic put function by moving an
unlikely check out of the most likely code path and further down.

Also updated the comments in the function.

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/mempool/rte_mempool.h | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 9f530db24b..aba90dbb5b 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1364,32 +1364,33 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 {
 	void **cache_objs;
 
-	/* No cache provided */
+	/* No cache provided? */
 	if (unlikely(cache == NULL))
 		goto driver_enqueue;
 
-	/* increment stat now, adding in mempool always success */
+	/* Increment stats now, adding in mempool always succeeds. */
 	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
 	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
 
-	/* The request itself is too big for the cache */
-	if (unlikely(n > cache->flushthresh))
-		goto driver_enqueue_stats_incremented;
-
-	/*
-	 * The cache follows the following algorithm:
-	 *   1. If the objects cannot be added to the cache without crossing
-	 *      the flush threshold, flush the cache to the backend.
-	 *   2. Add the objects to the cache.
-	 */
-
-	if (cache->len + n <= cache->flushthresh) {
+	if (likely(cache->len + n <= cache->flushthresh)) {
+		/*
+		 * The objects can be added to the cache without crossing the
+		 * flush threshold.
+		 */
 		cache_objs = &cache->objs[cache->len];
 		cache->len += n;
-	} else {
+	} else if (likely(n <= cache->flushthresh)) {
+		/*
+		 * The request itself fits into the cache.
+		 * But first, the cache must be flushed to the backend, so
+		 * adding the objects does not cross the flush threshold.
+		 */
 		cache_objs = &cache->objs[0];
 		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
 		cache->len = n;
+	} else {
+		/* The request itself is too big for the cache. */
+		goto driver_enqueue_stats_incremented;
 	}
 
 	/* Add the objects to the cache. */
@@ -1399,13 +1400,13 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 
 driver_enqueue:
 
-	/* increment stat now, adding in mempool always success */
+	/* Increment stats now, adding in mempool always succeeds. */
 	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
 	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
 
 driver_enqueue_stats_incremented:
 
-	/* push objects to the backend */
+	/* Push the objects to the backend. */
 	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 }
 
-- 
2.17.1



* Re: [PATCH] mempool: micro-optimize put function
  2022-11-16 10:18 [PATCH] mempool: micro-optimize put function Morten Brørup
@ 2022-11-16 11:04 ` Andrew Rybchenko
  2022-11-16 11:10   ` Morten Brørup
  2022-11-16 12:14 ` [PATCH v2] " Morten Brørup
  2022-12-24 10:46 ` [PATCH v3] " Morten Brørup
  2 siblings, 1 reply; 16+ messages in thread
From: Andrew Rybchenko @ 2022-11-16 11:04 UTC (permalink / raw)
  To: Morten Brørup, olivier.matz, dev
  Cc: honnappa.nagarahalli, bruce.richardson, konstantin.ananyev

On 11/16/22 13:18, Morten Brørup wrote:
> Micro-optimization:
> Reduced the most likely code path in the generic put function by moving an
> unlikely check out of the most likely code path and further down.
> 
> Also updated the comments in the function.
> 
> Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> ---
>   lib/mempool/rte_mempool.h | 35 ++++++++++++++++++-----------------
>   1 file changed, 18 insertions(+), 17 deletions(-)
> 
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 9f530db24b..aba90dbb5b 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -1364,32 +1364,33 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
>   {
>   	void **cache_objs;
>   
> -	/* No cache provided */
> +	/* No cache provided? */
>   	if (unlikely(cache == NULL))
>   		goto driver_enqueue;
>   
> -	/* increment stat now, adding in mempool always success */
> +	/* Increment stats now, adding in mempool always succeeds. */
>   	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
>   	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
>   
> -	/* The request itself is too big for the cache */
> -	if (unlikely(n > cache->flushthresh))
> -		goto driver_enqueue_stats_incremented;

I've kept the check here since it protects against overflow in len plus 
n below if n is really huge.

> -
> -	/*
> -	 * The cache follows the following algorithm:
> -	 *   1. If the objects cannot be added to the cache without crossing
> -	 *      the flush threshold, flush the cache to the backend.
> -	 *   2. Add the objects to the cache.
> -	 */
> -
> -	if (cache->len + n <= cache->flushthresh) {
> +	if (likely(cache->len + n <= cache->flushthresh)) {
> +		/*
> +		 * The objects can be added to the cache without crossing the
> +		 * flush threshold.
> +		 */
>   		cache_objs = &cache->objs[cache->len];
>   		cache->len += n;
> -	} else {
> +	} else if (likely(n <= cache->flushthresh)) {
> +		/*
> +		 * The request itself fits into the cache.
> +		 * But first, the cache must be flushed to the backend, so
> +		 * adding the objects does not cross the flush threshold.
> +		 */
>   		cache_objs = &cache->objs[0];
>   		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
>   		cache->len = n;
> +	} else {
> +		/* The request itself is too big for the cache. */
> +		goto driver_enqueue_stats_incremented;
>   	}
>   
>   	/* Add the objects to the cache. */
> @@ -1399,13 +1400,13 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
>   
>   driver_enqueue:
>   
> -	/* increment stat now, adding in mempool always success */
> +	/* Increment stats now, adding in mempool always succeeds. */
>   	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
>   	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
>   
>   driver_enqueue_stats_incremented:
>   
> -	/* push objects to the backend */
> +	/* Push the objects to the backend. */
>   	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
>   }
>   



* RE: [PATCH] mempool: micro-optimize put function
  2022-11-16 11:04 ` Andrew Rybchenko
@ 2022-11-16 11:10   ` Morten Brørup
  2022-11-16 11:29     ` Andrew Rybchenko
  0 siblings, 1 reply; 16+ messages in thread
From: Morten Brørup @ 2022-11-16 11:10 UTC (permalink / raw)
  To: Andrew Rybchenko, olivier.matz, dev
  Cc: honnappa.nagarahalli, bruce.richardson, konstantin.ananyev

> From: Andrew Rybchenko [mailto:andrew.rybchenko@oktetlabs.ru]
> Sent: Wednesday, 16 November 2022 12.05
> 
> On 11/16/22 13:18, Morten Brørup wrote:
> > Micro-optimization:
> > Reduced the most likely code path in the generic put function by
> moving an
> > unlikely check out of the most likely code path and further down.
> >
> > Also updated the comments in the function.
> >
> > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > ---
> >   lib/mempool/rte_mempool.h | 35 ++++++++++++++++++-----------------
> >   1 file changed, 18 insertions(+), 17 deletions(-)
> >
> > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > index 9f530db24b..aba90dbb5b 100644
> > --- a/lib/mempool/rte_mempool.h
> > +++ b/lib/mempool/rte_mempool.h
> > @@ -1364,32 +1364,33 @@ rte_mempool_do_generic_put(struct rte_mempool
> *mp, void * const *obj_table,
> >   {
> >   	void **cache_objs;
> >
> > -	/* No cache provided */
> > +	/* No cache provided? */
> >   	if (unlikely(cache == NULL))
> >   		goto driver_enqueue;
> >
> > -	/* increment stat now, adding in mempool always success */
> > +	/* Increment stats now, adding in mempool always succeeds. */
> >   	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> >   	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> >
> > -	/* The request itself is too big for the cache */
> > -	if (unlikely(n > cache->flushthresh))
> > -		goto driver_enqueue_stats_incremented;
> 
> I've kept the check here since it protects against overflow in len plus
> n below if n is really huge.

We can fix that, see below.

> 
> > -
> > -	/*
> > -	 * The cache follows the following algorithm:
> > -	 *   1. If the objects cannot be added to the cache without
> crossing
> > -	 *      the flush threshold, flush the cache to the backend.
> > -	 *   2. Add the objects to the cache.
> > -	 */
> > -
> > -	if (cache->len + n <= cache->flushthresh) {
> > +	if (likely(cache->len + n <= cache->flushthresh)) {

It is an invariant that cache->len <= cache->flushthresh, so the above comparison can be rewritten to protect against overflow:

if (likely(n <= cache->flushthresh - cache->len)) {
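
To see why this is safe, here is a minimal standalone sketch (assuming the cache fields are uint32_t, as in struct rte_mempool_cache; the values are made up, and this is illustration only, not part of the patch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t len = 100;            /* current cache fill level */
	uint32_t flushthresh = 512;    /* flush threshold */
	uint32_t n = UINT32_MAX - 50;  /* a "really huge" request */

	/* Original form: len + n wraps around to 49, which is below the
	 * threshold, so the huge request would slip through undetected. */
	printf("len + n <= flushthresh: %d\n", len + n <= flushthresh);

	/* Rewritten form: the invariant len <= flushthresh guarantees the
	 * subtraction cannot underflow, so the huge request is rejected. */
	printf("n <= flushthresh - len: %d\n", n <= flushthresh - len);

	return 0;
}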

> > +		/*
> > +		 * The objects can be added to the cache without crossing
> the
> > +		 * flush threshold.
> > +		 */
> >   		cache_objs = &cache->objs[cache->len];
> >   		cache->len += n;
> > -	} else {
> > +	} else if (likely(n <= cache->flushthresh)) {
> > +		/*
> > +		 * The request itself fits into the cache.
> > +		 * But first, the cache must be flushed to the backend, so
> > +		 * adding the objects does not cross the flush threshold.
> > +		 */
> >   		cache_objs = &cache->objs[0];
> >   		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
> >   		cache->len = n;
> > +	} else {
> > +		/* The request itself is too big for the cache. */
> > +		goto driver_enqueue_stats_incremented;
> >   	}
> >
> >   	/* Add the objects to the cache. */
> > @@ -1399,13 +1400,13 @@ rte_mempool_do_generic_put(struct rte_mempool
> *mp, void * const *obj_table,
> >
> >   driver_enqueue:
> >
> > -	/* increment stat now, adding in mempool always success */
> > +	/* Increment stats now, adding in mempool always succeeds. */
> >   	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
> >   	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> >
> >   driver_enqueue_stats_incremented:
> >
> > -	/* push objects to the backend */
> > +	/* Push the objects to the backend. */
> >   	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
> >   }
> >
> 



* Re: [PATCH] mempool: micro-optimize put function
  2022-11-16 11:10   ` Morten Brørup
@ 2022-11-16 11:29     ` Andrew Rybchenko
  0 siblings, 0 replies; 16+ messages in thread
From: Andrew Rybchenko @ 2022-11-16 11:29 UTC (permalink / raw)
  To: Morten Brørup, olivier.matz, dev
  Cc: honnappa.nagarahalli, bruce.richardson, konstantin.ananyev

On 11/16/22 14:10, Morten Brørup wrote:
>> From: Andrew Rybchenko [mailto:andrew.rybchenko@oktetlabs.ru]
>> Sent: Wednesday, 16 November 2022 12.05
>>
>> On 11/16/22 13:18, Morten Brørup wrote:
>>> Micro-optimization:
>>> Reduced the most likely code path in the generic put function by
>> moving an
>>> unlikely check out of the most likely code path and further down.
>>>
>>> Also updated the comments in the function.
>>>
>>> Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
>>> ---
>>>    lib/mempool/rte_mempool.h | 35 ++++++++++++++++++-----------------
>>>    1 file changed, 18 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
>>> index 9f530db24b..aba90dbb5b 100644
>>> --- a/lib/mempool/rte_mempool.h
>>> +++ b/lib/mempool/rte_mempool.h
>>> @@ -1364,32 +1364,33 @@ rte_mempool_do_generic_put(struct rte_mempool
>> *mp, void * const *obj_table,
>>>    {
>>>    	void **cache_objs;
>>>
>>> -	/* No cache provided */
>>> +	/* No cache provided? */
>>>    	if (unlikely(cache == NULL))
>>>    		goto driver_enqueue;
>>>
>>> -	/* increment stat now, adding in mempool always success */
>>> +	/* Increment stats now, adding in mempool always succeeds. */
>>>    	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
>>>    	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
>>>
>>> -	/* The request itself is too big for the cache */
>>> -	if (unlikely(n > cache->flushthresh))
>>> -		goto driver_enqueue_stats_incremented;
>>
>> I've kept the check here since it protects against overflow in len plus
>> n below if n is really huge.
> 
> We can fix that, see below.
> 
>>
>>> -
>>> -	/*
>>> -	 * The cache follows the following algorithm:
>>> -	 *   1. If the objects cannot be added to the cache without
>> crossing
>>> -	 *      the flush threshold, flush the cache to the backend.
>>> -	 *   2. Add the objects to the cache.
>>> -	 */
>>> -
>>> -	if (cache->len + n <= cache->flushthresh) {
>>> +	if (likely(cache->len + n <= cache->flushthresh)) {
> 
> It is an invariant that cache->len <= cache->flushthresh, so the above comparison can be rewritten to protect against overflow:
> 
> if (likely(n <= cache->flushthresh - cache->len)) {
> 

True, but it would be useful to highlight the usage of the
invariant here using either a comment or an assert.

IMHO it is wrong to use likely() here since, as far as I know, it makes
the else branch very expensive, but crossing the flush threshold is an
expected branch and should not be made that expensive.
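
For reference, these hints are only compiler annotations, roughly along
these lines (paraphrasing rte_branch_prediction.h, so treat the exact
form as an assumption). They mainly bias code layout, so the expected
branch becomes the fall-through path and the other branch is placed out
of line:

/* Rough sketch of the branch hints, paraphrased from
 * rte_branch_prediction.h. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)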

>>> +		/*
>>> +		 * The objects can be added to the cache without crossing
>> the
>>> +		 * flush threshold.
>>> +		 */
>>>    		cache_objs = &cache->objs[cache->len];
>>>    		cache->len += n;
>>> -	} else {
>>> +	} else if (likely(n <= cache->flushthresh)) {
>>> +		/*
>>> +		 * The request itself fits into the cache.
>>> +		 * But first, the cache must be flushed to the backend, so
>>> +		 * adding the objects does not cross the flush threshold.
>>> +		 */
>>>    		cache_objs = &cache->objs[0];
>>>    		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
>>>    		cache->len = n;
>>> +	} else {
>>> +		/* The request itself is too big for the cache. */
>>> +		goto driver_enqueue_stats_incremented;
>>>    	}
>>>
>>>    	/* Add the objects to the cache. */
>>> @@ -1399,13 +1400,13 @@ rte_mempool_do_generic_put(struct rte_mempool
>> *mp, void * const *obj_table,
>>>
>>>    driver_enqueue:
>>>
>>> -	/* increment stat now, adding in mempool always success */
>>> +	/* Increment stats now, adding in mempool always succeeds. */
>>>    	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
>>>    	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
>>>
>>>    driver_enqueue_stats_incremented:
>>>
>>> -	/* push objects to the backend */
>>> +	/* Push the objects to the backend. */
>>>    	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
>>>    }
>>>
>>
> 



* [PATCH v2] mempool: micro-optimize put function
  2022-11-16 10:18 [PATCH] mempool: micro-optimize put function Morten Brørup
  2022-11-16 11:04 ` Andrew Rybchenko
@ 2022-11-16 12:14 ` Morten Brørup
  2022-11-16 15:51   ` Honnappa Nagarahalli
  2022-12-24 10:46 ` [PATCH v3] " Morten Brørup
  2 siblings, 1 reply; 16+ messages in thread
From: Morten Brørup @ 2022-11-16 12:14 UTC (permalink / raw)
  To: olivier.matz, andrew.rybchenko, dev
  Cc: honnappa.nagarahalli, bruce.richardson, konstantin.ananyev,
	Morten Brørup

Micro-optimization:
Reduced the most likely code path in the generic put function by moving an
unlikely check out of the most likely code path and further down.

Also updated the comments in the function.

v2 (feedback from Andrew Rybchenko):
* Modified comparison to prevent overflow if n is really huge and len is
  non-zero.
* Added assertion about the invariant preventing overflow in the
  comparison.
* Crossing the threshold is not extremely unlikely, so removed likely()
  from that comparison.
  The compiler will generate code with optimal static branch prediction
  here anyway.

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/mempool/rte_mempool.h | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 9f530db24b..dd1a3177d6 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1364,32 +1364,36 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 {
 	void **cache_objs;
 
-	/* No cache provided */
+	/* No cache provided? */
 	if (unlikely(cache == NULL))
 		goto driver_enqueue;
 
-	/* increment stat now, adding in mempool always success */
+	/* Increment stats now, adding in mempool always succeeds. */
 	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
 	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
 
-	/* The request itself is too big for the cache */
-	if (unlikely(n > cache->flushthresh))
-		goto driver_enqueue_stats_incremented;
-
-	/*
-	 * The cache follows the following algorithm:
-	 *   1. If the objects cannot be added to the cache without crossing
-	 *      the flush threshold, flush the cache to the backend.
-	 *   2. Add the objects to the cache.
-	 */
+	/* Assert the invariant preventing overflow in the comparison below. */
+	RTE_ASSERT(cache->len <= cache->flushthresh);
 
-	if (cache->len + n <= cache->flushthresh) {
+	if (n <= cache->flushthresh - cache->len) {
+		/*
+		 * The objects can be added to the cache without crossing the
+		 * flush threshold.
+		 */
 		cache_objs = &cache->objs[cache->len];
 		cache->len += n;
-	} else {
+	} else if (likely(n <= cache->flushthresh)) {
+		/*
+		 * The request itself fits into the cache.
+		 * But first, the cache must be flushed to the backend, so
+		 * adding the objects does not cross the flush threshold.
+		 */
 		cache_objs = &cache->objs[0];
 		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
 		cache->len = n;
+	} else {
+		/* The request itself is too big for the cache. */
+		goto driver_enqueue_stats_incremented;
 	}
 
 	/* Add the objects to the cache. */
@@ -1399,13 +1403,13 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 
 driver_enqueue:
 
-	/* increment stat now, adding in mempool always success */
+	/* Increment stats now, adding in mempool always succeeds. */
 	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
 	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
 
 driver_enqueue_stats_incremented:
 
-	/* push objects to the backend */
+	/* Push the objects to the backend. */
 	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 }
 
-- 
2.17.1



* RE: [PATCH v2] mempool: micro-optimize put function
  2022-11-16 12:14 ` [PATCH v2] " Morten Brørup
@ 2022-11-16 15:51   ` Honnappa Nagarahalli
  2022-11-16 15:59     ` Morten Brørup
  0 siblings, 1 reply; 16+ messages in thread
From: Honnappa Nagarahalli @ 2022-11-16 15:51 UTC (permalink / raw)
  To: Morten Brørup, olivier.matz, andrew.rybchenko, dev
  Cc: bruce.richardson, konstantin.ananyev, nd, nd

<snip>
> 
> Micro-optimization:
> Reduced the most likely code path in the generic put function by moving an
> unlikely check out of the most likely code path and further down.
> 
> Also updated the comments in the function.
> 
> v2 (feedback from Andrew Rybchenko):
> * Modified comparison to prevent overflow if n is really huge and len is
>   non-zero.
> * Added assertion about the invariant preventing overflow in the
>   comparison.
> * Crossing the threshold is not extremely unlikely, so removed likely()
>   from that comparison.
>   The compiler will generate code with optimal static branch prediction
>   here anyway.
> 
> Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> ---
>  lib/mempool/rte_mempool.h | 36 ++++++++++++++++++++----------------
>  1 file changed, 20 insertions(+), 16 deletions(-)
> 
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 9f530db24b..dd1a3177d6 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -1364,32 +1364,36 @@ rte_mempool_do_generic_put(struct
> rte_mempool *mp, void * const *obj_table,  {
>  	void **cache_objs;
> 
> -	/* No cache provided */
> +	/* No cache provided? */
>  	if (unlikely(cache == NULL))
>  		goto driver_enqueue;
> 
> -	/* increment stat now, adding in mempool always success */
> +	/* Increment stats now, adding in mempool always succeeds. */
>  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
>  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> 
> -	/* The request itself is too big for the cache */
> -	if (unlikely(n > cache->flushthresh))
> -		goto driver_enqueue_stats_incremented;
> -
> -	/*
> -	 * The cache follows the following algorithm:
> -	 *   1. If the objects cannot be added to the cache without crossing
> -	 *      the flush threshold, flush the cache to the backend.
> -	 *   2. Add the objects to the cache.
> -	 */
> +	/* Assert the invariant preventing overflow in the comparison below.
> */
> +	RTE_ASSERT(cache->len <= cache->flushthresh);
> 
> -	if (cache->len + n <= cache->flushthresh) {
> +	if (n <= cache->flushthresh - cache->len) {
> +		/*
> +		 * The objects can be added to the cache without crossing the
> +		 * flush threshold.
> +		 */
>  		cache_objs = &cache->objs[cache->len];
>  		cache->len += n;
> -	} else {
> +	} else if (likely(n <= cache->flushthresh)) {
IMO, this is a misconfiguration on the application part. In the PMDs I have looked at, max value of 'n' is controlled by compile time constants. Application could do a compile time check on the cache threshold or we could have another RTE_ASSERT on this.
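
Something along these lines on the application side, for example (the APP_* constants are purely illustrative, and the 3/2 factor assumes the current cache flush-threshold multiplier of 1.5):

#include <assert.h>

#define APP_CACHE_SIZE 512	/* per-lcore mempool cache size */
#define APP_MAX_BURST  64	/* largest bulk put the application issues */

/* Reject at build time a bulk size that can never fit within the cache. */
static_assert(APP_MAX_BURST <= (APP_CACHE_SIZE * 3) / 2,
	      "bulk size exceeds the mempool cache flush threshold");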

> +		/*
> +		 * The request itself fits into the cache.
> +		 * But first, the cache must be flushed to the backend, so
> +		 * adding the objects does not cross the flush threshold.
> +		 */
>  		cache_objs = &cache->objs[0];
>  		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache-
> >len);
>  		cache->len = n;
> +	} else {
> +		/* The request itself is too big for the cache. */
> +		goto driver_enqueue_stats_incremented;
>  	}
> 
>  	/* Add the objects to the cache. */
> @@ -1399,13 +1403,13 @@ rte_mempool_do_generic_put(struct
> rte_mempool *mp, void * const *obj_table,
> 
>  driver_enqueue:
> 
> -	/* increment stat now, adding in mempool always success */
> +	/* Increment stats now, adding in mempool always succeeds. */
>  	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
>  	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> 
>  driver_enqueue_stats_incremented:
> 
> -	/* push objects to the backend */
> +	/* Push the objects to the backend. */
>  	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);  }
> 
> --
> 2.17.1



* RE: [PATCH v2] mempool: micro-optimize put function
  2022-11-16 15:51   ` Honnappa Nagarahalli
@ 2022-11-16 15:59     ` Morten Brørup
  2022-11-16 16:26       ` Honnappa Nagarahalli
  0 siblings, 1 reply; 16+ messages in thread
From: Morten Brørup @ 2022-11-16 15:59 UTC (permalink / raw)
  To: Honnappa Nagarahalli, olivier.matz, andrew.rybchenko, dev
  Cc: bruce.richardson, konstantin.ananyev, nd, nd

> From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> Sent: Wednesday, 16 November 2022 16.51
> 
> <snip>
> >
> > Micro-optimization:
> > Reduced the most likely code path in the generic put function by
> moving an
> > unlikely check out of the most likely code path and further down.
> >
> > Also updated the comments in the function.
> >
> > v2 (feedback from Andrew Rybchenko):
> > * Modified comparison to prevent overflow if n is really huge and len
> is
> >   non-zero.
> > * Added assertion about the invariant preventing overflow in the
> >   comparison.
> > * Crossing the threshold is not extremely unlikely, so removed
> likely()
> >   from that comparison.
> >   The compiler will generate code with optimal static branch
> prediction
> >   here anyway.
> >
> > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > ---
> >  lib/mempool/rte_mempool.h | 36 ++++++++++++++++++++----------------
> >  1 file changed, 20 insertions(+), 16 deletions(-)
> >
> > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > index 9f530db24b..dd1a3177d6 100644
> > --- a/lib/mempool/rte_mempool.h
> > +++ b/lib/mempool/rte_mempool.h
> > @@ -1364,32 +1364,36 @@ rte_mempool_do_generic_put(struct
> > rte_mempool *mp, void * const *obj_table,  {
> >  	void **cache_objs;
> >
> > -	/* No cache provided */
> > +	/* No cache provided? */
> >  	if (unlikely(cache == NULL))
> >  		goto driver_enqueue;
> >
> > -	/* increment stat now, adding in mempool always success */
> > +	/* Increment stats now, adding in mempool always succeeds. */
> >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> >
> > -	/* The request itself is too big for the cache */
> > -	if (unlikely(n > cache->flushthresh))
> > -		goto driver_enqueue_stats_incremented;
> > -
> > -	/*
> > -	 * The cache follows the following algorithm:
> > -	 *   1. If the objects cannot be added to the cache without
> crossing
> > -	 *      the flush threshold, flush the cache to the backend.
> > -	 *   2. Add the objects to the cache.
> > -	 */
> > +	/* Assert the invariant preventing overflow in the comparison
> below.
> > */
> > +	RTE_ASSERT(cache->len <= cache->flushthresh);
> >
> > -	if (cache->len + n <= cache->flushthresh) {
> > +	if (n <= cache->flushthresh - cache->len) {
> > +		/*
> > +		 * The objects can be added to the cache without crossing
> the
> > +		 * flush threshold.
> > +		 */
> >  		cache_objs = &cache->objs[cache->len];
> >  		cache->len += n;
> > -	} else {
> > +	} else if (likely(n <= cache->flushthresh)) {
> IMO, this is a misconfiguration on the application part. In the PMDs I
> have looked at, max value of 'n' is controlled by compile time
> constants. Application could do a compile time check on the cache
> threshold or we could have another RTE_ASSERT on this.

There could be applications using a mempool for something else than mbufs.

In that case, the application should be allowed to get/put many objects in one transaction.
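
For example (a minimal sketch with made-up names and sizes, not taken from any real application):

#include <rte_mempool.h>

#define APP_NUM_OBJS 8192
#define APP_OBJ_SIZE 256
#define APP_BULK     512	/* deliberately larger than the cache can absorb */

static void app_example(void)
{
	void *objs[APP_BULK];
	struct rte_mempool *mp;

	/* A mempool holding plain fixed-size objects, not mbufs. */
	mp = rte_mempool_create("app_pool", APP_NUM_OBJS, APP_OBJ_SIZE,
			256 /* cache_size */, 0 /* private_data_size */,
			NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return;

	/* Large get/put transactions are legal regardless of the cache size. */
	if (rte_mempool_get_bulk(mp, objs, APP_BULK) == 0)
		rte_mempool_put_bulk(mp, objs, APP_BULK);
}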

> 
> > +		/*
> > +		 * The request itself fits into the cache.
> > +		 * But first, the cache must be flushed to the backend, so
> > +		 * adding the objects does not cross the flush threshold.
> > +		 */
> >  		cache_objs = &cache->objs[0];
> >  		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache-
> > >len);
> >  		cache->len = n;
> > +	} else {
> > +		/* The request itself is too big for the cache. */
> > +		goto driver_enqueue_stats_incremented;
> >  	}
> >
> >  	/* Add the objects to the cache. */
> > @@ -1399,13 +1403,13 @@ rte_mempool_do_generic_put(struct
> > rte_mempool *mp, void * const *obj_table,
> >
> >  driver_enqueue:
> >
> > -	/* increment stat now, adding in mempool always success */
> > +	/* Increment stats now, adding in mempool always succeeds. */
> >  	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
> >  	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> >
> >  driver_enqueue_stats_incremented:
> >
> > -	/* push objects to the backend */
> > +	/* Push the objects to the backend. */
> >  	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);  }
> >
> > --
> > 2.17.1



* RE: [PATCH v2] mempool: micro-optimize put function
  2022-11-16 15:59     ` Morten Brørup
@ 2022-11-16 16:26       ` Honnappa Nagarahalli
  2022-11-16 17:39         ` Morten Brørup
  0 siblings, 1 reply; 16+ messages in thread
From: Honnappa Nagarahalli @ 2022-11-16 16:26 UTC (permalink / raw)
  To: Morten Brørup, olivier.matz, andrew.rybchenko, dev
  Cc: bruce.richardson, konstantin.ananyev, nd, nd

<snip>

> > >
> > > Micro-optimization:
> > > Reduced the most likely code path in the generic put function by
> > moving an
> > > unlikely check out of the most likely code path and further down.
> > >
> > > Also updated the comments in the function.
> > >
> > > v2 (feedback from Andrew Rybchenko):
> > > * Modified comparison to prevent overflow if n is really huge and
> > > len
> > is
> > >   non-zero.
> > > * Added assertion about the invariant preventing overflow in the
> > >   comparison.
> > > * Crossing the threshold is not extremely unlikely, so removed
> > likely()
> > >   from that comparison.
> > >   The compiler will generate code with optimal static branch
> > prediction
> > >   here anyway.
> > >
> > > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > > ---
> > >  lib/mempool/rte_mempool.h | 36 ++++++++++++++++++++----------------
> > >  1 file changed, 20 insertions(+), 16 deletions(-)
> > >
> > > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > > index 9f530db24b..dd1a3177d6 100644
> > > --- a/lib/mempool/rte_mempool.h
> > > +++ b/lib/mempool/rte_mempool.h
> > > @@ -1364,32 +1364,36 @@ rte_mempool_do_generic_put(struct
> > > rte_mempool *mp, void * const *obj_table,  {
> > >  	void **cache_objs;
> > >
> > > -	/* No cache provided */
> > > +	/* No cache provided? */
> > >  	if (unlikely(cache == NULL))
> > >  		goto driver_enqueue;
> > >
> > > -	/* increment stat now, adding in mempool always success */
> > > +	/* Increment stats now, adding in mempool always succeeds. */
> > >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> > >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> > >
> > > -	/* The request itself is too big for the cache */
> > > -	if (unlikely(n > cache->flushthresh))
> > > -		goto driver_enqueue_stats_incremented;
> > > -
> > > -	/*
> > > -	 * The cache follows the following algorithm:
> > > -	 *   1. If the objects cannot be added to the cache without
> > crossing
> > > -	 *      the flush threshold, flush the cache to the backend.
> > > -	 *   2. Add the objects to the cache.
> > > -	 */
> > > +	/* Assert the invariant preventing overflow in the comparison
> > below.
> > > */
> > > +	RTE_ASSERT(cache->len <= cache->flushthresh);
> > >
> > > -	if (cache->len + n <= cache->flushthresh) {
> > > +	if (n <= cache->flushthresh - cache->len) {
> > > +		/*
> > > +		 * The objects can be added to the cache without crossing
> > the
> > > +		 * flush threshold.
> > > +		 */
> > >  		cache_objs = &cache->objs[cache->len];
> > >  		cache->len += n;
> > > -	} else {
> > > +	} else if (likely(n <= cache->flushthresh)) {
> > IMO, this is a misconfiguration on the application part. In the PMDs I
> > have looked at, max value of 'n' is controlled by compile time
> > constants. Application could do a compile time check on the cache
> > threshold or we could have another RTE_ASSERT on this.
> 
> There could be applications using a mempool for something else than mbufs.
Agree

> 
> In that case, the application should be allowed to get/put many objects in
> one transaction.
Still, this is a misconfiguration on the application. On one hand the threshold is configured for 'x' but they are sending a request which is more than 'x'. It should be possible to change the threshold configuration or reduce the request size.

If the application does not fix the misconfiguration, it is possible that it will always hit this case and does not get the benefit of using the per-core cache.

With this check, we are introducing an additional memcpy as well. I am not sure if reusing the latest buffers is better than having a memcpy.

> 
> >
> > > +		/*
> > > +		 * The request itself fits into the cache.
> > > +		 * But first, the cache must be flushed to the backend, so
> > > +		 * adding the objects does not cross the flush threshold.
> > > +		 */
> > >  		cache_objs = &cache->objs[0];
> > >  		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache-
> > > >len);
> > >  		cache->len = n;
> > > +	} else {
> > > +		/* The request itself is too big for the cache. */
> > > +		goto driver_enqueue_stats_incremented;
> > >  	}
> > >
> > >  	/* Add the objects to the cache. */ @@ -1399,13 +1403,13 @@
> > > rte_mempool_do_generic_put(struct rte_mempool *mp, void * const
> > > *obj_table,
> > >
> > >  driver_enqueue:
> > >
> > > -	/* increment stat now, adding in mempool always success */
> > > +	/* Increment stats now, adding in mempool always succeeds. */
> > >  	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
> > >  	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> > >
> > >  driver_enqueue_stats_incremented:
> > >
> > > -	/* push objects to the backend */
> > > +	/* Push the objects to the backend. */
> > >  	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);  }
> > >
> > > --
> > > 2.17.1



* RE: [PATCH v2] mempool: micro-optimize put function
  2022-11-16 16:26       ` Honnappa Nagarahalli
@ 2022-11-16 17:39         ` Morten Brørup
  2022-12-19  8:50           ` Morten Brørup
  0 siblings, 1 reply; 16+ messages in thread
From: Morten Brørup @ 2022-11-16 17:39 UTC (permalink / raw)
  To: Honnappa Nagarahalli, olivier.matz, andrew.rybchenko, dev
  Cc: bruce.richardson, konstantin.ananyev, nd, nd

> From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> Sent: Wednesday, 16 November 2022 17.27
> 
> <snip>
> 
> > > >
> > > > Micro-optimization:
> > > > Reduced the most likely code path in the generic put function by
> > > moving an
> > > > unlikely check out of the most likely code path and further down.
> > > >
> > > > Also updated the comments in the function.
> > > >
> > > > v2 (feedback from Andrew Rybchenko):
> > > > * Modified comparison to prevent overflow if n is really huge and
> > > > len
> > > is
> > > >   non-zero.
> > > > * Added assertion about the invariant preventing overflow in the
> > > >   comparison.
> > > > * Crossing the threshold is not extremely unlikely, so removed
> > > likely()
> > > >   from that comparison.
> > > >   The compiler will generate code with optimal static branch
> > > prediction
> > > >   here anyway.
> > > >
> > > > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > > > ---
> > > >  lib/mempool/rte_mempool.h | 36 ++++++++++++++++++++-------------
> ---
> > > >  1 file changed, 20 insertions(+), 16 deletions(-)
> > > >
> > > > diff --git a/lib/mempool/rte_mempool.h
> b/lib/mempool/rte_mempool.h
> > > > index 9f530db24b..dd1a3177d6 100644
> > > > --- a/lib/mempool/rte_mempool.h
> > > > +++ b/lib/mempool/rte_mempool.h
> > > > @@ -1364,32 +1364,36 @@ rte_mempool_do_generic_put(struct
> > > > rte_mempool *mp, void * const *obj_table,  {
> > > >  	void **cache_objs;
> > > >
> > > > -	/* No cache provided */
> > > > +	/* No cache provided? */
> > > >  	if (unlikely(cache == NULL))
> > > >  		goto driver_enqueue;
> > > >
> > > > -	/* increment stat now, adding in mempool always success */
> > > > +	/* Increment stats now, adding in mempool always succeeds.
> */
> > > >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> > > >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> > > >
> > > > -	/* The request itself is too big for the cache */
> > > > -	if (unlikely(n > cache->flushthresh))
> > > > -		goto driver_enqueue_stats_incremented;
> > > > -
> > > > -	/*
> > > > -	 * The cache follows the following algorithm:
> > > > -	 *   1. If the objects cannot be added to the cache without
> > > crossing
> > > > -	 *      the flush threshold, flush the cache to the
> backend.
> > > > -	 *   2. Add the objects to the cache.
> > > > -	 */
> > > > +	/* Assert the invariant preventing overflow in the
> comparison
> > > below.
> > > > */
> > > > +	RTE_ASSERT(cache->len <= cache->flushthresh);
> > > >
> > > > -	if (cache->len + n <= cache->flushthresh) {
> > > > +	if (n <= cache->flushthresh - cache->len) {
> > > > +		/*
> > > > +		 * The objects can be added to the cache without
> crossing
> > > the
> > > > +		 * flush threshold.
> > > > +		 */
> > > >  		cache_objs = &cache->objs[cache->len];
> > > >  		cache->len += n;
> > > > -	} else {
> > > > +	} else if (likely(n <= cache->flushthresh)) {
> > > IMO, this is a misconfiguration on the application part. In the
> PMDs I
> > > have looked at, max value of 'n' is controlled by compile time
> > > constants. Application could do a compile time check on the cache
> > > threshold or we could have another RTE_ASSERT on this.
> >
> > There could be applications using a mempool for something else than
> mbufs.
> Agree
> 
> >
> > In that case, the application should be allowed to get/put many
> objects in
> > one transaction.
> Still, this is a misconfiguration on the application. On one hand the
> threshold is configured for 'x' but they are sending a request which is
> more than 'x'. It should be possible to change the threshold
> configuration or reduce the request size.
> 
> If the application does not fix the misconfiguration, it is possible
> that it will always hit this case and does not get the benefit of using
> the per-core cache.

Correct. I suppose this is the intended behavior of this API.

The zero-copy API proposed in another patch [1] has stricter requirements on the bulk size.

[1]: http://inbox.dpdk.org/dev/20221115161822.70886-1-mb@smartsharesystems.com/T/#u

> 
> With this check, we are introducing an additional memcpy as well. I am
> not sure if reusing the latest buffers is better than having an memcpy.

There is no additional memcpy. The large bulk transfer is stored directly in the backend pool, bypassing the mempool cache.

Please note that this check is not new, it has just been moved. Before this patch, it was checked on every call (if a cache is present); with this patch, it is only checked if the entire request cannot go directly into the cache.

> 
> >
> > >
> > > > +		/*
> > > > +		 * The request itself fits into the cache.
> > > > +		 * But first, the cache must be flushed to the
> backend, so
> > > > +		 * adding the objects does not cross the flush
> threshold.
> > > > +		 */
> > > >  		cache_objs = &cache->objs[0];
> > > >  		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache-
> > > > >len);
> > > >  		cache->len = n;
> > > > +	} else {
> > > > +		/* The request itself is too big for the cache. */
> > > > +		goto driver_enqueue_stats_incremented;
> > > >  	}
> > > >
> > > >  	/* Add the objects to the cache. */ @@ -1399,13 +1403,13 @@
> > > > rte_mempool_do_generic_put(struct rte_mempool *mp, void * const
> > > > *obj_table,
> > > >
> > > >  driver_enqueue:
> > > >
> > > > -	/* increment stat now, adding in mempool always success */
> > > > +	/* Increment stats now, adding in mempool always succeeds.
> */
> > > >  	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
> > > >  	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> > > >
> > > >  driver_enqueue_stats_incremented:
> > > >
> > > > -	/* push objects to the backend */
> > > > +	/* Push the objects to the backend. */
> > > >  	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);  }
> > > >
> > > > --
> > > > 2.17.1



* RE: [PATCH v2] mempool: micro-optimize put function
  2022-11-16 17:39         ` Morten Brørup
@ 2022-12-19  8:50           ` Morten Brørup
  2022-12-22 13:52             ` Konstantin Ananyev
  0 siblings, 1 reply; 16+ messages in thread
From: Morten Brørup @ 2022-12-19  8:50 UTC (permalink / raw)
  To: olivier.matz, andrew.rybchenko, thomas
  Cc: dev, honnappa.nagarahalli, bruce.richardson, konstantin.ananyev, nd

PING mempool maintainers.

If you don't provide ACK or Review to patches, how should Thomas know that they are ready for merging?

-Morten

> From: Morten Brørup [mailto:mb@smartsharesystems.com]
> Sent: Wednesday, 16 November 2022 18.39
> 
> > From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> > Sent: Wednesday, 16 November 2022 17.27
> >
> > <snip>
> >
> > > > >
> > > > > Micro-optimization:
> > > > > Reduced the most likely code path in the generic put function
> by
> > > > moving an
> > > > > unlikely check out of the most likely code path and further
> down.
> > > > >
> > > > > Also updated the comments in the function.
> > > > >
> > > > > v2 (feedback from Andrew Rybchenko):
> > > > > * Modified comparison to prevent overflow if n is really huge
> and
> > > > > len
> > > > is
> > > > >   non-zero.
> > > > > * Added assertion about the invariant preventing overflow in
> the
> > > > >   comparison.
> > > > > * Crossing the threshold is not extremely unlikely, so removed
> > > > likely()
> > > > >   from that comparison.
> > > > >   The compiler will generate code with optimal static branch
> > > > prediction
> > > > >   here anyway.
> > > > >
> > > > > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > > > > ---
> > > > >  lib/mempool/rte_mempool.h | 36 ++++++++++++++++++++-----------
> --
> > ---
> > > > >  1 file changed, 20 insertions(+), 16 deletions(-)
> > > > >
> > > > > diff --git a/lib/mempool/rte_mempool.h
> > b/lib/mempool/rte_mempool.h
> > > > > index 9f530db24b..dd1a3177d6 100644
> > > > > --- a/lib/mempool/rte_mempool.h
> > > > > +++ b/lib/mempool/rte_mempool.h
> > > > > @@ -1364,32 +1364,36 @@ rte_mempool_do_generic_put(struct
> > > > > rte_mempool *mp, void * const *obj_table,  {
> > > > >  	void **cache_objs;
> > > > >
> > > > > -	/* No cache provided */
> > > > > +	/* No cache provided? */
> > > > >  	if (unlikely(cache == NULL))
> > > > >  		goto driver_enqueue;
> > > > >
> > > > > -	/* increment stat now, adding in mempool always success */
> > > > > +	/* Increment stats now, adding in mempool always succeeds.
> > */
> > > > >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> > > > >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> > > > >
> > > > > -	/* The request itself is too big for the cache */
> > > > > -	if (unlikely(n > cache->flushthresh))
> > > > > -		goto driver_enqueue_stats_incremented;
> > > > > -
> > > > > -	/*
> > > > > -	 * The cache follows the following algorithm:
> > > > > -	 *   1. If the objects cannot be added to the cache without
> > > > crossing
> > > > > -	 *      the flush threshold, flush the cache to the
> > backend.
> > > > > -	 *   2. Add the objects to the cache.
> > > > > -	 */
> > > > > +	/* Assert the invariant preventing overflow in the
> > comparison
> > > > below.
> > > > > */
> > > > > +	RTE_ASSERT(cache->len <= cache->flushthresh);
> > > > >
> > > > > -	if (cache->len + n <= cache->flushthresh) {
> > > > > +	if (n <= cache->flushthresh - cache->len) {
> > > > > +		/*
> > > > > +		 * The objects can be added to the cache without
> > crossing
> > > > the
> > > > > +		 * flush threshold.
> > > > > +		 */
> > > > >  		cache_objs = &cache->objs[cache->len];
> > > > >  		cache->len += n;
> > > > > -	} else {
> > > > > +	} else if (likely(n <= cache->flushthresh)) {
> > > > IMO, this is a misconfiguration on the application part. In the
> > PMDs I
> > > > have looked at, max value of 'n' is controlled by compile time
> > > > constants. Application could do a compile time check on the cache
> > > > threshold or we could have another RTE_ASSERT on this.
> > >
> > > There could be applications using a mempool for something else than
> > mbufs.
> > Agree
> >
> > >
> > > In that case, the application should be allowed to get/put many
> > objects in
> > > one transaction.
> > Still, this is a misconfiguration on the application. On one hand the
> > threshold is configured for 'x' but they are sending a request which
> is
> > more than 'x'. It should be possible to change the threshold
> > configuration or reduce the request size.
> >
> > If the application does not fix the misconfiguration, it is possible
> > that it will always hit this case and does not get the benefit of
> using
> > the per-core cache.
> 
> Correct. I suppose this is the intended behavior of this API.
> 
> The zero-copy API proposed in another patch [1] has stricter
> requirements to the bulk size.
> 
> [1]: http://inbox.dpdk.org/dev/20221115161822.70886-1-
> mb@smartsharesystems.com/T/#u
> 
> >
> > With this check, we are introducing an additional memcpy as well. I
> am
> > > not sure if reusing the latest buffers is better than having a
> memcpy.
> 
> There is no additional memcpy. The large bulk transfer is stored
> directly in the backend pool, bypassing the mempool cache.
> 
> Please note that this check is not new, it has just been moved. Before
> this patch, it was checked on every call (if a cache is present); with
> this patch, it is only checked if the entire request cannot go directly
> into the cache.
> 
> >
> > >
> > > >
> > > > > +		/*
> > > > > +		 * The request itself fits into the cache.
> > > > > +		 * But first, the cache must be flushed to the
> > backend, so
> > > > > +		 * adding the objects does not cross the flush
> > threshold.
> > > > > +		 */
> > > > >  		cache_objs = &cache->objs[0];
> > > > >  		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache-
> > > > > >len);
> > > > >  		cache->len = n;
> > > > > +	} else {
> > > > > +		/* The request itself is too big for the cache. */
> > > > > +		goto driver_enqueue_stats_incremented;
> > > > >  	}
> > > > >
> > > > >  	/* Add the objects to the cache. */ @@ -1399,13 +1403,13 @@
> > > > > rte_mempool_do_generic_put(struct rte_mempool *mp, void * const
> > > > > *obj_table,
> > > > >
> > > > >  driver_enqueue:
> > > > >
> > > > > -	/* increment stat now, adding in mempool always success */
> > > > > +	/* Increment stats now, adding in mempool always succeeds.
> > */
> > > > >  	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
> > > > >  	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> > > > >
> > > > >  driver_enqueue_stats_incremented:
> > > > >
> > > > > -	/* push objects to the backend */
> > > > > +	/* Push the objects to the backend. */
> > > > >  	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);  }
> > > > >
> > > > > --
> > > > > 2.17.1



* RE: [PATCH v2] mempool: micro-optimize put function
  2022-12-19  8:50           ` Morten Brørup
@ 2022-12-22 13:52             ` Konstantin Ananyev
  2022-12-22 15:02               ` Morten Brørup
  0 siblings, 1 reply; 16+ messages in thread
From: Konstantin Ananyev @ 2022-12-22 13:52 UTC (permalink / raw)
  To: Morten Brørup, olivier.matz, andrew.rybchenko, thomas
  Cc: dev, honnappa.nagarahalli, bruce.richardson, nd


Hi Morten,

> PING mempool maintainers.
> 
> If you don't provide ACK or Review to patches, how should Thomas know that they are ready for merging?
> 
> -Morten

The code change itself looks ok to me. 
Though I don't think it would really bring any noticeable speedup.
But yes, it looks a bit nicer this way, especially with extra comments.
One question though, why do you feel this assert:
RTE_ASSERT(cache->len <= cache->flushthresh);
is necessary?
I can't see any way it could happen, unless something is totally broken
within the app.   

> > From: Morten Brørup [mailto:mb@smartsharesystems.com]
> > Sent: Wednesday, 16 November 2022 18.39
> >
> > > From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> > > Sent: Wednesday, 16 November 2022 17.27
> > >
> > >
> > > > > >
> > > > > > Micro-optimization:
> > > > > > Reduced the most likely code path in the generic put function
> > by
> > > > > moving an
> > > > > > unlikely check out of the most likely code path and further
> > down.
> > > > > >
> > > > > > Also updated the comments in the function.
> > > > > >
> > > > > > v2 (feedback from Andrew Rybchenko):
> > > > > > * Modified comparison to prevent overflow if n is really huge
> > and
> > > > > > len
> > > > > is
> > > > > >   non-zero.
> > > > > > * Added assertion about the invariant preventing overflow in
> > the
> > > > > >   comparison.
> > > > > > * Crossing the threshold is not extremely unlikely, so removed
> > > > > likely()
> > > > > >   from that comparison.
> > > > > >   The compiler will generate code with optimal static branch
> > > > > prediction
> > > > > >   here anyway.
> > > > > >
> > > > > > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > > > > > ---
> > > > > >  lib/mempool/rte_mempool.h | 36 ++++++++++++++++++++-----------
> > --
> > > ---
> > > > > >  1 file changed, 20 insertions(+), 16 deletions(-)
> > > > > >
> > > > > > diff --git a/lib/mempool/rte_mempool.h
> > > b/lib/mempool/rte_mempool.h
> > > > > > index 9f530db24b..dd1a3177d6 100644
> > > > > > --- a/lib/mempool/rte_mempool.h
> > > > > > +++ b/lib/mempool/rte_mempool.h
> > > > > > @@ -1364,32 +1364,36 @@ rte_mempool_do_generic_put(struct
> > > > > > rte_mempool *mp, void * const *obj_table,  {
> > > > > >  	void **cache_objs;
> > > > > >
> > > > > > -	/* No cache provided */
> > > > > > +	/* No cache provided? */
> > > > > >  	if (unlikely(cache == NULL))
> > > > > >  		goto driver_enqueue;
> > > > > >
> > > > > > -	/* increment stat now, adding in mempool always success */
> > > > > > +	/* Increment stats now, adding in mempool always succeeds.
> > > */
> > > > > >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> > > > > >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> > > > > >
> > > > > > -	/* The request itself is too big for the cache */
> > > > > > -	if (unlikely(n > cache->flushthresh))
> > > > > > -		goto driver_enqueue_stats_incremented;
> > > > > > -
> > > > > > -	/*
> > > > > > -	 * The cache follows the following algorithm:
> > > > > > -	 *   1. If the objects cannot be added to the cache without
> > > > > crossing
> > > > > > -	 *      the flush threshold, flush the cache to the
> > > backend.
> > > > > > -	 *   2. Add the objects to the cache.
> > > > > > -	 */
> > > > > > +	/* Assert the invariant preventing overflow in the
> > > comparison
> > > > > below.
> > > > > > */
> > > > > > +	RTE_ASSERT(cache->len <= cache->flushthresh);
> > > > > >
> > > > > > -	if (cache->len + n <= cache->flushthresh) {
> > > > > > +	if (n <= cache->flushthresh - cache->len) {
> > > > > > +		/*
> > > > > > +		 * The objects can be added to the cache without
> > > crossing
> > > > > the
> > > > > > +		 * flush threshold.
> > > > > > +		 */
> > > > > >  		cache_objs = &cache->objs[cache->len];
> > > > > >  		cache->len += n;
> > > > > > -	} else {
> > > > > > +	} else if (likely(n <= cache->flushthresh)) {
> > > > > IMO, this is a misconfiguration on the application part. In the
> > > PMDs I
> > > > > have looked at, max value of 'n' is controlled by compile time
> > > > > constants. Application could do a compile time check on the cache
> > > > > threshold or we could have another RTE_ASSERT on this.
> > > >
> > > > There could be applications using a mempool for something else than
> > > mbufs.
> > > Agree
> > >
> > > >
> > > > In that case, the application should be allowed to get/put many
> > > objects in
> > > > one transaction.
> > > Still, this is a misconfiguration on the application. On one hand the
> > > threshold is configured for 'x' but they are sending a request which
> > is
> > > more than 'x'. It should be possible to change the threshold
> > > configuration or reduce the request size.
> > >
> > > If the application does not fix the misconfiguration, it is possible
> > > that it will always hit this case and does not get the benefit of
> > using
> > > the per-core cache.
> >
> > Correct. I suppose this is the intended behavior of this API.
> >
> > The zero-copy API proposed in another patch [1] has stricter
> > requirements on the bulk size.
> >
> > [1]: http://inbox.dpdk.org/dev/20221115161822.70886-1-
> > mb@smartsharesystems.com/T/#u
> >
> > >
> > > With this check, we are introducing an additional memcpy as well. I
> > am
> > > not sure if reusing the latest buffers is better than having a
> > memcpy.
> >
> > There is no additional memcpy. The large bulk transfer is stored
> > directly in the backend pool, bypassing the mempool cache.
> >
> > Please note that this check is not new, it has just been moved. Before
> > this patch, it was checked on every call (if a cache is present); with
> > this patch, it is only checked if the entire request cannot go directly
> > into the cache.
> >
> > >
> > > >
> > > > >
> > > > > > +		/*
> > > > > > +		 * The request itself fits into the cache.
> > > > > > +		 * But first, the cache must be flushed to the
> > > backend, so
> > > > > > +		 * adding the objects does not cross the flush
> > > threshold.
> > > > > > +		 */
> > > > > >  		cache_objs = &cache->objs[0];
> > > > > >  		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache-
> > > > > > >len);
> > > > > >  		cache->len = n;
> > > > > > +	} else {
> > > > > > +		/* The request itself is too big for the cache. */
> > > > > > +		goto driver_enqueue_stats_incremented;
> > > > > >  	}
> > > > > >
> > > > > >  	/* Add the objects to the cache. */ @@ -1399,13 +1403,13 @@
> > > > > > rte_mempool_do_generic_put(struct rte_mempool *mp, void * const
> > > > > > *obj_table,
> > > > > >
> > > > > >  driver_enqueue:
> > > > > >
> > > > > > -	/* increment stat now, adding in mempool always success */
> > > > > > +	/* Increment stats now, adding in mempool always succeeds.
> > > */
> > > > > >  	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
> > > > > >  	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> > > > > >
> > > > > >  driver_enqueue_stats_incremented:
> > > > > >
> > > > > > -	/* push objects to the backend */
> > > > > > +	/* Push the objects to the backend. */
> > > > > >  	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);  }
> > > > > >
> > > > > > --
> > > > > > 2.17.1



* RE: [PATCH v2] mempool: micro-optimize put function
  2022-12-22 13:52             ` Konstantin Ananyev
@ 2022-12-22 15:02               ` Morten Brørup
  2022-12-23 16:34                 ` Konstantin Ananyev
  0 siblings, 1 reply; 16+ messages in thread
From: Morten Brørup @ 2022-12-22 15:02 UTC (permalink / raw)
  To: Konstantin Ananyev, olivier.matz, andrew.rybchenko, thomas
  Cc: dev, honnappa.nagarahalli, bruce.richardson, nd

> From: Konstantin Ananyev [mailto:konstantin.ananyev@huawei.com]
> Sent: Thursday, 22 December 2022 14.52
> 
> Hi Morten,
> 
> > PING mempool maintainers.
> >
> > If you don't provide ACK or Review to patches, how should Thomas know
> that they are ready for merging?
> >
> > -Morten
> 
> The code change itself looks ok to me.
> Though I don't think it would really bring any noticeable speedup.
> But yes, it looks a bit nicer this way, especially with extra comments.

Agree, removing the compare and branch instructions from the likely code path provides no noticeable speedup, but it makes the code cleaner.

> One question though, why do you feel this assert:
> RTE_ASSERT(cache->len <= cache->flushthresh);
> is necessary?

I could have written it into the comment above it, but instead chose to add the assertion as an improved comment.

> I can't see any way it could happen, unless something is totally broken
> within the app.

Agree. These are exactly the circumstances where assertions come into play. With more assertions in the code, less code is likely to execute before an assertion is hit, making it easier to find the root cause of such errors.

Please note that RTE_ASSERTs are omitted unless built with RTE_ENABLE_ASSERT, so this assertion is omitted and thus has no performance impact for a normal build.
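
Roughly, it boils down to this (paraphrased from rte_debug.h, so the exact form may differ between DPDK versions):

/* Sketch of how the assertion compiles away in a normal build. */
#ifdef RTE_ENABLE_ASSERT
#define RTE_ASSERT(exp)	RTE_VERIFY(exp)
#else
#define RTE_ASSERT(exp)	do {} while (0)
#endif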

> 
> > > From: Morten Brørup [mailto:mb@smartsharesystems.com]
> > > Sent: Wednesday, 16 November 2022 18.39
> > >
> > > > From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> > > > Sent: Wednesday, 16 November 2022 17.27
> > > >
> > > >
> > > > > > >
> > > > > > > Micro-optimization:
> > > > > > > Reduced the most likely code path in the generic put
> function
> > > by
> > > > > > moving an
> > > > > > > unlikely check out of the most likely code path and further
> > > down.
> > > > > > >
> > > > > > > Also updated the comments in the function.
> > > > > > >
> > > > > > > v2 (feedback from Andrew Rybchenko):
> > > > > > > * Modified comparison to prevent overflow if n is really
> huge
> > > and
> > > > > > > len
> > > > > > is
> > > > > > >   non-zero.
> > > > > > > * Added assertion about the invariant preventing overflow
> in
> > > the
> > > > > > >   comparison.
> > > > > > > * Crossing the threshold is not extremely unlikely, so
> removed
> > > > > > likely()
> > > > > > >   from that comparison.
> > > > > > >   The compiler will generate code with optimal static
> branch
> > > > > > prediction
> > > > > > >   here anyway.
> > > > > > >
> > > > > > > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > > > > > > ---
> > > > > > >  lib/mempool/rte_mempool.h | 36 ++++++++++++++++++++-------
> ----
> > > --
> > > > ---
> > > > > > >  1 file changed, 20 insertions(+), 16 deletions(-)
> > > > > > >
> > > > > > > diff --git a/lib/mempool/rte_mempool.h
> > > > b/lib/mempool/rte_mempool.h
> > > > > > > index 9f530db24b..dd1a3177d6 100644
> > > > > > > --- a/lib/mempool/rte_mempool.h
> > > > > > > +++ b/lib/mempool/rte_mempool.h
> > > > > > > @@ -1364,32 +1364,36 @@ rte_mempool_do_generic_put(struct
> > > > > > > rte_mempool *mp, void * const *obj_table,  {
> > > > > > >  	void **cache_objs;
> > > > > > >
> > > > > > > -	/* No cache provided */
> > > > > > > +	/* No cache provided? */
> > > > > > >  	if (unlikely(cache == NULL))
> > > > > > >  		goto driver_enqueue;
> > > > > > >
> > > > > > > -	/* increment stat now, adding in mempool always
> success */
> > > > > > > +	/* Increment stats now, adding in mempool always
> succeeds.
> > > > */
> > > > > > >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> > > > > > >  	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> > > > > > >
> > > > > > > -	/* The request itself is too big for the cache */
> > > > > > > -	if (unlikely(n > cache->flushthresh))
> > > > > > > -		goto driver_enqueue_stats_incremented;
> > > > > > > -
> > > > > > > -	/*
> > > > > > > -	 * The cache follows the following algorithm:
> > > > > > > -	 *   1. If the objects cannot be added to the cache
> without
> > > > > > crossing
> > > > > > > -	 *      the flush threshold, flush the cache to the
> > > > backend.
> > > > > > > -	 *   2. Add the objects to the cache.
> > > > > > > -	 */
> > > > > > > +	/* Assert the invariant preventing overflow in the
> > > > comparison
> > > > > > below.
> > > > > > > */
> > > > > > > +	RTE_ASSERT(cache->len <= cache->flushthresh);
> > > > > > >
> > > > > > > -	if (cache->len + n <= cache->flushthresh) {
> > > > > > > +	if (n <= cache->flushthresh - cache->len) {
> > > > > > > +		/*
> > > > > > > +		 * The objects can be added to the cache
> without
> > > > crossing
> > > > > > the
> > > > > > > +		 * flush threshold.
> > > > > > > +		 */
> > > > > > >  		cache_objs = &cache->objs[cache->len];
> > > > > > >  		cache->len += n;
> > > > > > > -	} else {
> > > > > > > +	} else if (likely(n <= cache->flushthresh)) {
> > > > > > IMO, this is a misconfiguration on the application part. In
> the
> > > > PMDs I
> > > > > > have looked at, max value of 'n' is controlled by compile
> time
> > > > > > constants. Application could do a compile time check on the
> cache
> > > > > > threshold or we could have another RTE_ASSERT on this.
> > > > >
> > > > > There could be applications using a mempool for something else
> than
> > > > mbufs.
> > > > Agree
> > > >
> > > > >
> > > > > In that case, the application should be allowed to get/put many
> > > > objects in
> > > > > one transaction.
> > > > Still, this is a misconfiguration on the application. On one hand
> the
> > > > threshold is configured for 'x' but they are sending a request
> which
> > > is
> > > > more than 'x'. It should be possible to change the threshold
> > > > configuration or reduce the request size.
> > > >
> > > > If the application does not fix the misconfiguration, it is
> possible
> > > > that it will always hit this case and does not get the benefit of
> > > using
> > > > the per-core cache.
> > >
> > > Correct. I suppose this is the intended behavior of this API.
> > >
> > > The zero-copy API proposed in another patch [1] has stricter
> > > requirements to the bulk size.
> > >
> > > [1]: http://inbox.dpdk.org/dev/20221115161822.70886-1-
> > > mb@smartsharesystems.com/T/#u
> > >
> > > >
> > > > With this check, we are introducing an additional memcpy as well.
> I
> > > am
> > > > not sure if reusing the latest buffers is better than having an
> > > memcpy.
> > >
> > > There is no additional memcpy. The large bulk transfer is stored
> > > directly in the backend pool, bypassing the mempool cache.
> > >
> > > Please note that this check is not new, it has just been moved.
> Before
> > > this patch, it was checked on every call (if a cache is present);
> with
> > > this patch, it is only checked if the entire request cannot go
> directly
> > > into the cache.
> > >
> > > >
> > > > >
> > > > > >
> > > > > > > +		/*
> > > > > > > +		 * The request itself fits into the cache.
> > > > > > > +		 * But first, the cache must be flushed to the
> > > > backend, so
> > > > > > > +		 * adding the objects does not cross the flush
> > > > threshold.
> > > > > > > +		 */
> > > > > > >  		cache_objs = &cache->objs[0];
> > > > > > >  		rte_mempool_ops_enqueue_bulk(mp, cache_objs,
> cache-
> > > > > > > >len);
> > > > > > >  		cache->len = n;
> > > > > > > +	} else {
> > > > > > > +		/* The request itself is too big for the cache.
> */
> > > > > > > +		goto driver_enqueue_stats_incremented;
> > > > > > >  	}
> > > > > > >
> > > > > > >  	/* Add the objects to the cache. */ @@ -1399,13
> +1403,13 @@
> > > > > > > rte_mempool_do_generic_put(struct rte_mempool *mp, void *
> const
> > > > > > > *obj_table,
> > > > > > >
> > > > > > >  driver_enqueue:
> > > > > > >
> > > > > > > -	/* increment stat now, adding in mempool always
> success */
> > > > > > > +	/* Increment stats now, adding in mempool always
> succeeds.
> > > > */
> > > > > > >  	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
> > > > > > >  	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> > > > > > >
> > > > > > >  driver_enqueue_stats_incremented:
> > > > > > >
> > > > > > > -	/* push objects to the backend */
> > > > > > > +	/* Push the objects to the backend. */
> > > > > > >  	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);  }
> > > > > > >
> > > > > > > --
> > > > > > > 2.17.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: [PATCH v2] mempool: micro-optimize put function
  2022-12-22 15:02               ` Morten Brørup
@ 2022-12-23 16:34                 ` Konstantin Ananyev
  0 siblings, 0 replies; 16+ messages in thread
From: Konstantin Ananyev @ 2022-12-23 16:34 UTC (permalink / raw)
  To: Morten Brørup, olivier.matz, andrew.rybchenko, thomas
  Cc: dev, honnappa.nagarahalli, bruce.richardson, nd



> > Hi Morten,
> >
> > > PING mempool maintainers.
> > >
> > > If you don't provide ACK or Review to patches, how should Thomas know
> > that it is ready for merging?
> > >
> > > -Morten
> >
> > The code changes themselves look ok to me.
> > Though I don't think it would really bring any noticeable speedup.
> > But yes, it looks a bit nicer this way, especially with extra comments.
> 
> Agree, removing the compare and branch instructions from the likely code path provides no noticeable speedup, but makes the code
> cleaner.
> 
> > One question though, why do you feel this assert:
> > RTE_ASSERT(cache->len <= cache->flushthresh);
> > is necessary?
> 
> I could have written it into the comment above it, but instead chose to add the assertion as an improved comment.
> 
> > I can't see any way it could happen, unless something is totally broken
> > within the app.
> 
> Agree. These are the circumstances where assertions come into play. With more assertions in the code, less code is likely to execute
> before hitting an assertion, making it easier to find the root cause of such errors.
> 
> Please note that RTE_ASSERTs are compiled out unless built with RTE_ENABLE_ASSERT, so this assertion has no performance impact in a
> normal build.

I am aware that RTE_ASSERT is not enabled by default.
My question was more about why you feel the assert() is necessary in this particular case.
Does it guard against some specific scenario?
Anyway, I am ok either way, with or without the assert() here.
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v3] mempool: micro-optimize put function
  2022-11-16 10:18 [PATCH] mempool: micro-optimize put function Morten Brørup
  2022-11-16 11:04 ` Andrew Rybchenko
  2022-11-16 12:14 ` [PATCH v2] " Morten Brørup
@ 2022-12-24 10:46 ` Morten Brørup
  2022-12-27  8:54   ` Andrew Rybchenko
  2 siblings, 1 reply; 16+ messages in thread
From: Morten Brørup @ 2022-12-24 10:46 UTC (permalink / raw)
  To: olivier.matz, andrew.rybchenko, dev
  Cc: honnappa.nagarahalli, bruce.richardson, konstantin.ananyev,
	Morten Brørup

Micro-optimization:
Reduced the most likely code path in the generic put function by moving an
unlikely check out of the most likely code path and further down.

Also updated the comments in the function.

v3 (feedback from Konstantin Ananyev):
* Removed assertion and comment about the invariant preventing overflow
  in the comparison. They were more confusing than enlightening.
v2 (feedback from Andrew Rybchenko):
* Modified comparison to prevent overflow if n is really huge and len is
  non-zero.
* Added assertion about the invariant preventing overflow in the
  comparison.
* Crossing the threshold is not extremely unlikely, so removed likely()
  from that comparison.
  The compiler will generate code with optimal static branch prediction
  here anyway.

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
 lib/mempool/rte_mempool.h | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 9f530db24b..61ca0c6b65 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1364,32 +1364,33 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 {
 	void **cache_objs;
 
-	/* No cache provided */
+	/* No cache provided? */
 	if (unlikely(cache == NULL))
 		goto driver_enqueue;
 
-	/* increment stat now, adding in mempool always success */
+	/* Increment stats now, adding in mempool always succeeds. */
 	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
 	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
 
-	/* The request itself is too big for the cache */
-	if (unlikely(n > cache->flushthresh))
-		goto driver_enqueue_stats_incremented;
-
-	/*
-	 * The cache follows the following algorithm:
-	 *   1. If the objects cannot be added to the cache without crossing
-	 *      the flush threshold, flush the cache to the backend.
-	 *   2. Add the objects to the cache.
-	 */
-
-	if (cache->len + n <= cache->flushthresh) {
+	if (n <= cache->flushthresh - cache->len) {
+		/*
+		 * The objects can be added to the cache without crossing the
+		 * flush threshold.
+		 */
 		cache_objs = &cache->objs[cache->len];
 		cache->len += n;
-	} else {
+	} else if (likely(n <= cache->flushthresh)) {
+		/*
+		 * The request itself fits into the cache.
+		 * But first, the cache must be flushed to the backend, so
+		 * adding the objects does not cross the flush threshold.
+		 */
 		cache_objs = &cache->objs[0];
 		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
 		cache->len = n;
+	} else {
+		/* The request itself is too big for the cache. */
+		goto driver_enqueue_stats_incremented;
 	}
 
 	/* Add the objects to the cache. */
@@ -1399,13 +1400,13 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 
 driver_enqueue:
 
-	/* increment stat now, adding in mempool always success */
+	/* Increment stats now, adding in mempool always succeeds. */
 	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
 	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
 
 driver_enqueue_stats_incremented:
 
-	/* push objects to the backend */
+	/* Push the objects to the backend. */
 	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v3] mempool: micro-optimize put function
  2022-12-24 10:46 ` [PATCH v3] " Morten Brørup
@ 2022-12-27  8:54   ` Andrew Rybchenko
  2022-12-27 15:37     ` Morten Brørup
  0 siblings, 1 reply; 16+ messages in thread
From: Andrew Rybchenko @ 2022-12-27  8:54 UTC (permalink / raw)
  To: Morten Brørup, olivier.matz, dev
  Cc: honnappa.nagarahalli, bruce.richardson, konstantin.ananyev

On 12/24/22 13:46, Morten Brørup wrote:
> Micro-optimization:
> Reduced the most likely code path in the generic put function by moving an
> unlikely check out of the most likely code path and further down.
> 
> Also updated the comments in the function.
> 
> v3 (feedback from Konstantin Ananyev):
> * Removed assertion and comment about the invariant preventing overflow
>    in the comparison. They were more confusing than enlightening.
> v2 (feedback from Andrew Rybchenko):
> * Modified comparison to prevent overflow if n is really huge and len is
>    non-zero.
> * Added assertion about the invariant preventing overflow in the
>    comparison.
> * Crossing the threshold is not extremely unlikely, so removed likely()
>    from that comparison.
>    The compiler will generate code with optimal static branch prediction
>    here anyway.
> 
> Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

Thanks for optimizing it further.

Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

> ---
>   lib/mempool/rte_mempool.h | 35 ++++++++++++++++++-----------------
>   1 file changed, 18 insertions(+), 17 deletions(-)
> 
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 9f530db24b..61ca0c6b65 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -1364,32 +1364,33 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
>   {
>   	void **cache_objs;
>   
> -	/* No cache provided */
> +	/* No cache provided? */

IMHO such changes do not add value and just add noise.
There are a few similar cases below.
No strong opinion in any case.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: [PATCH v3] mempool: micro-optimize put function
  2022-12-27  8:54   ` Andrew Rybchenko
@ 2022-12-27 15:37     ` Morten Brørup
  0 siblings, 0 replies; 16+ messages in thread
From: Morten Brørup @ 2022-12-27 15:37 UTC (permalink / raw)
  To: Andrew Rybchenko, olivier.matz, dev
  Cc: honnappa.nagarahalli, bruce.richardson, konstantin.ananyev

> From: Andrew Rybchenko [mailto:andrew.rybchenko@oktetlabs.ru]
> Sent: Tuesday, 27 December 2022 09.54
> 
> On 12/24/22 13:46, Morten Brørup wrote:
> > Micro-optimization:
> > Reduced the most likely code path in the generic put function by
> moving an
> > unlikely check out of the most likely code path and further down.
> >
> > Also updated the comments in the function.
> >
> > v3 (feedback from Konstantin Ananyev):
> > * Removed assertion and comment about the invariant preventing
> overflow
> >    in the comparison. They were more confusing than enlightening.
> > v2 (feedback from Andrew Rybchenko):
> > * Modified comparison to prevent overflow if n is really huge and len
> is
> >    non-zero.
> > * Added assertion about the invariant preventing overflow in the
> >    comparison.
> > * Crossing the threshold is not extremely unlikely, so removed
> likely()
> >    from that comparison.
> >    The compiler will generate code with optimal static branch
> prediction
> >    here anyway.
> >
> > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> 
> Thanks for optimizing it further.
> 
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
> > ---
> >   lib/mempool/rte_mempool.h | 35 ++++++++++++++++++-----------------
> >   1 file changed, 18 insertions(+), 17 deletions(-)
> >
> > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > index 9f530db24b..61ca0c6b65 100644
> > --- a/lib/mempool/rte_mempool.h
> > +++ b/lib/mempool/rte_mempool.h
> > @@ -1364,32 +1364,33 @@ rte_mempool_do_generic_put(struct rte_mempool
> *mp, void * const *obj_table,
> >   {
> >   	void **cache_objs;
> >
> > -	/* No cache provided */
> > +	/* No cache provided? */
> 
> IMHO such changes do not add value and just add noise.
> There are few similar cases below.
> No strong opinion in any case.
> 

This patch is obsolete, because the zero-copy patch v5 [1] reworks rte_mempool_do_generic_put() to use the zero-copy put function, which is optimized in a similar way.

[1]: https://patchwork.dpdk.org/project/dpdk/patch/20221227151700.80887-1-mb@smartsharesystems.com/
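
As a rough sketch of the zero-copy idea (the names below are invented for illustration; this is not the API proposed in [1]): instead of passing an obj_table that the put function copies into the cache, the caller reserves space directly in the cache's object array and writes the objects there itself, so the intermediate copy disappears.

#include <stddef.h>
#include <stdio.h>

#define CACHE_SIZE 8

struct toy_cache {
	size_t len;
	void *objs[CACHE_SIZE];
};

/* Reserve n slots in the cache and return a pointer to them, or NULL. */
static void **
toy_zc_put_bulk(struct toy_cache *c, size_t n)
{
	if (c->len + n > CACHE_SIZE)
		return NULL;            /* caller must fall back to a copying put */
	void **slot = &c->objs[c->len];
	c->len += n;
	return slot;
}

int
main(void)
{
	struct toy_cache cache = { 0 };
	int a, b;

	void **slot = toy_zc_put_bulk(&cache, 2);
	if (slot != NULL) {
		slot[0] = &a;           /* objects are stored directly in the cache */
		slot[1] = &b;
	}
	printf("cache length: %zu\n", cache.len); /* prints 2 */
	return 0;
}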


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2022-12-27 15:37 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-16 10:18 [PATCH] mempool: micro-optimize put function Morten Brørup
2022-11-16 11:04 ` Andrew Rybchenko
2022-11-16 11:10   ` Morten Brørup
2022-11-16 11:29     ` Andrew Rybchenko
2022-11-16 12:14 ` [PATCH v2] " Morten Brørup
2022-11-16 15:51   ` Honnappa Nagarahalli
2022-11-16 15:59     ` Morten Brørup
2022-11-16 16:26       ` Honnappa Nagarahalli
2022-11-16 17:39         ` Morten Brørup
2022-12-19  8:50           ` Morten Brørup
2022-12-22 13:52             ` Konstantin Ananyev
2022-12-22 15:02               ` Morten Brørup
2022-12-23 16:34                 ` Konstantin Ananyev
2022-12-24 10:46 ` [PATCH v3] " Morten Brørup
2022-12-27  8:54   ` Andrew Rybchenko
2022-12-27 15:37     ` Morten Brørup
