* [PATCH] lib/librte_pmd_virtio fix can't receive packets after rx_q is empty
@ 2015-03-20 10:46 linhaifeng
  2015-03-20 16:54 ` Xie, Huawei
  0 siblings, 1 reply; 5+ messages in thread
From: linhaifeng @ 2015-03-20 10:46 UTC (permalink / raw)
  To: dev-VfR2kkLFssw

From: Linhaifeng <haifeng.lin-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>

If mbuf allocation fails ring_size times, the rx_q may become empty
and can never receive packets again, because nb_used stays 0 forever.

So we should try to refill even when nb_used is 0. After someone else
frees mbufs, we can resume receiving packets.

Signed-off-by: Linhaifeng <haifeng.lin-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
 lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
index 1d74b34..5c7e0cd 100644
--- a/lib/librte_pmd_virtio/virtio_rxtx.c
+++ b/lib/librte_pmd_virtio/virtio_rxtx.c
@@ -495,7 +495,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
 
 	if (num == 0)
-		return 0;
+		goto refill;
 
 	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
 	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
@@ -536,6 +536,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 	rxvq->packets += nb_rx;
 
+refill:
 	/* Allocate new mbuf for the used descriptor */
 	error = ENOSPC;
 	while (likely(!virtqueue_full(rxvq))) {
-- 
1.8.5.2.msysgit.0
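
For context, the receive path after this change looks roughly as follows. This
is only a sketch: names that do not appear in the diff above (VIRTQUEUE_NUSED,
rte_rxmbuf_alloc, virtqueue_enqueue_recv_refill) are assumed helper names used
for illustration.

	nb_used = VIRTQUEUE_NUSED(rxvq);        /* used descriptors returned by the host */
	num = (nb_used < nb_pkts) ? nb_used : nb_pkts;

	if (num == 0)
		goto refill;                    /* was "return 0": once the VQ was empty,
						 * nb_used stayed 0 and no refill ever ran */

	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
	/* ... hand the dequeued mbufs to rx_pkts[], count nb_rx ... */

refill:
	/* Try to post fresh mbufs even when nothing was received, so the
	 * queue recovers as soon as the mempool has free mbufs again. */
	error = ENOSPC;
	while (likely(!virtqueue_full(rxvq))) {
		new_mbuf = rte_rxmbuf_alloc(rxvq->mpool);               /* assumed helper */
		if (unlikely(new_mbuf == NULL))
			break;
		error = virtqueue_enqueue_recv_refill(rxvq, new_mbuf);  /* assumed helper */
		if (unlikely(error)) {
			rte_pktmbuf_free(new_mbuf);
			break;
		}
	}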


* Re: [PATCH] lib/librte_pmd_virtio fix can't receive packets after rx_q is empty
  2015-03-20 10:46 [PATCH] lib/librte_pmd_virtio fix can't receive packets after rx_q is empty linhaifeng
@ 2015-03-20 16:54 ` Xie, Huawei
       [not found]   ` <C37D651A908B024F974696C65296B57B0F3EC863-0J0gbvR4kThpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
  0 siblings, 1 reply; 5+ messages in thread
From: Xie, Huawei @ 2015-03-20 16:54 UTC (permalink / raw)
  To: linhaifeng, dev-VfR2kkLFssw

On 3/20/2015 6:47 PM, linhaifeng wrote:
> From: Linhaifeng <haifeng.lin-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>
> If mbuf allocation fails ring_size times, the rx_q may become empty
> and can never receive packets again, because nb_used stays 0 forever.
Agreed. In the current implementation, once the VQ becomes empty, we have
no chance to refill it again.
The simple fix is to receive one and then refill one, as other PMDs do. We
need to consider which strategy is best in terms of performance in the
future.
How did you find this? Through code review or a real workload?
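
For illustration only, that per-packet strategy would look roughly like this;
dequeue_one() stands in for the real dequeue path, and the allocation and
refill helpers are assumed names:

	/* "receive one, then refill one" per used descriptor */
	while (nb_rx < nb_pkts && nb_used > 0) {
		rx_pkts[nb_rx++] = dequeue_one(rxvq);    /* consume a used slot */
		nb_used--;
		new_mbuf = rte_rxmbuf_alloc(rxvq->mpool);
		if (new_mbuf != NULL)
			virtqueue_enqueue_recv_refill(rxvq, new_mbuf);
		/* if the allocation fails, the consumed slot is not replaced,
		 * so the ring can still shrink toward empty */
	}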
> So we should try to refill even when nb_used is 0. After someone else
> frees mbufs, we can resume receiving packets.
>
> Signed-off-by: Linhaifeng <haifeng.lin-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> ---
>  lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
> index 1d74b34..5c7e0cd 100644
> --- a/lib/librte_pmd_virtio/virtio_rxtx.c
> +++ b/lib/librte_pmd_virtio/virtio_rxtx.c
> @@ -495,7 +495,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  		num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
>  
>  	if (num == 0)
> -		return 0;
> +		goto refill;
>  
>  	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
>  	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
> @@ -536,6 +536,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  
>  	rxvq->packets += nb_rx;
>  
> +refill:
>  	/* Allocate new mbuf for the used descriptor */
>  	error = ENOSPC;
>  	while (likely(!virtqueue_full(rxvq))) {



* Re: [PATCH] lib/librte_pmd_virtio fix can't receive packets after rx_q is empty
       [not found]   ` <C37D651A908B024F974696C65296B57B0F3EC863-0J0gbvR4kThpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2015-03-21  1:30     ` Linhaifeng
  2015-03-21  2:25     ` Linhaifeng
  1 sibling, 0 replies; 5+ messages in thread
From: Linhaifeng @ 2015-03-21  1:30 UTC (permalink / raw)
  To: Xie, Huawei, dev-VfR2kkLFssw



On 2015/3/21 0:54, Xie, Huawei wrote:
> On 3/20/2015 6:47 PM, linhaifeng wrote:
>> From: Linhaifeng <haifeng.lin-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>>
>> If mbuf allocation fails ring_size times, the rx_q may become empty
>> and can never receive packets again, because nb_used stays 0 forever.
> Agreed. In the current implementation, once the VQ becomes empty, we have
> no chance to refill it again.
> The simple fix is to receive one and then refill one, as other PMDs do. We
> need to consider which strategy is best in terms of performance in the
> future.
> How did you find this? Through code review or a real workload?

We found this through a real workload that uses vhost_net + virtio_pmd to forward packets in a VM.

>> So we should try to refill even when nb_used is 0. After someone else
>> frees mbufs, we can resume receiving packets.
>>
>> Signed-off-by: Linhaifeng <haifeng.lin-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>> ---
>>  lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
>> index 1d74b34..5c7e0cd 100644
>> --- a/lib/librte_pmd_virtio/virtio_rxtx.c
>> +++ b/lib/librte_pmd_virtio/virtio_rxtx.c
>> @@ -495,7 +495,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>  		num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
>>  
>>  	if (num == 0)
>> -		return 0;
>> +		goto refill;
>>  
>>  	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
>>  	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
>> @@ -536,6 +536,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>  
>>  	rxvq->packets += nb_rx;
>>  
>> +refill:
>>  	/* Allocate new mbuf for the used descriptor */
>>  	error = ENOSPC;
>>  	while (likely(!virtqueue_full(rxvq))) {
> 
> 
> 

-- 
Regards,
Haifeng


* Re: [PATCH] lib/librte_pmd_virtio fix can't receive packets after rx_q is empty
       [not found]   ` <C37D651A908B024F974696C65296B57B0F3EC863-0J0gbvR4kThpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
  2015-03-21  1:30     ` Linhaifeng
@ 2015-03-21  2:25     ` Linhaifeng
  2015-03-24 10:10       ` Xie, Huawei
  1 sibling, 1 reply; 5+ messages in thread
From: Linhaifeng @ 2015-03-21  2:25 UTC (permalink / raw)
  To: Xie, Huawei, dev-VfR2kkLFssw



On 2015/3/21 0:54, Xie, Huawei wrote:
> On 3/20/2015 6:47 PM, linhaifeng wrote:
>> From: Linhaifeng <haifeng.lin-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>>
>> If mbuf allocation fails ring_size times, the rx_q may become empty
>> and can never receive packets again, because nb_used stays 0 forever.
> Agreed. In the current implementation, once the VQ becomes empty, we have
> no chance to refill it again.
> The simple fix is to receive one and then refill one, as other PMDs do.

"Receive one and then refill one" also have this problem.If refill also
failed the VQ would be empty too.

> We need to consider which strategy is best in terms of performance in the
> future.
> How did you find this? Through code review or a real workload?
>> So we should try to refill even when nb_used is 0. After someone else
>> frees mbufs, we can resume receiving packets.
>>
>> Signed-off-by: Linhaifeng <haifeng.lin-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>> ---
>>  lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
>> index 1d74b34..5c7e0cd 100644
>> --- a/lib/librte_pmd_virtio/virtio_rxtx.c
>> +++ b/lib/librte_pmd_virtio/virtio_rxtx.c
>> @@ -495,7 +495,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>  		num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
>>  
>>  	if (num == 0)
>> -		return 0;
>> +		goto refill;
>>  
>>  	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
>>  	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
>> @@ -536,6 +536,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>  
>>  	rxvq->packets += nb_rx;
>>  
>> +refill:
>>  	/* Allocate new mbuf for the used descriptor */
>>  	error = ENOSPC;
>>  	while (likely(!virtqueue_full(rxvq))) {
> 
> 
> 

-- 
Regards,
Haifeng


* Re: [PATCH] lib/librte_pmd_virtio fix can't receive packets after rx_q is empty
  2015-03-21  2:25     ` Linhaifeng
@ 2015-03-24 10:10       ` Xie, Huawei
  0 siblings, 0 replies; 5+ messages in thread
From: Xie, Huawei @ 2015-03-24 10:10 UTC (permalink / raw)
  To: Linhaifeng, dev-VfR2kkLFssw

On 3/21/2015 10:25 AM, Linhaifeng wrote:
>
> On 2015/3/21 0:54, Xie, Huawei wrote:
>> On 3/20/2015 6:47 PM, linhaifeng wrote:
>>> From: Linhaifeng <haifeng.lin-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>>>
>>> If mbuf allocation fails ring_size times, the rx_q may become empty
>>> and can never receive packets again, because nb_used stays 0 forever.
>> Agreed. In the current implementation, once the VQ becomes empty, we have
>> no chance to refill it again.
>> The simple fix is to receive one and then refill one, as other PMDs do.
> "Receive one and then refill one" also have this problem.If refill also
> failed the VQ would be empty too.
Correction: refill one first, and receive one on success of the refill.
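
Roughly (again only a sketch, with dequeue_one() and the allocation/refill
helpers as assumed names): allocate the replacement first, and only consume
the used descriptor if that allocation succeeded, so a failed refill leaves
the slot for a later call.

	/* "refill one, then receive one on success of the refill" */
	while (nb_rx < nb_pkts && nb_used > 0) {
		new_mbuf = rte_rxmbuf_alloc(rxvq->mpool);
		if (new_mbuf == NULL)
			break;                           /* keep the used slot; retry later */
		rx_pkts[nb_rx++] = dequeue_one(rxvq);
		virtqueue_enqueue_recv_refill(rxvq, new_mbuf);
		nb_used--;
	}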
>
>> We need to consider which strategy is best in terms of performance in
>> the future.
>> How did you find this? Through code review or a real workload?
>>> so we should try to refill when nb_used is 0.After otherone free mbuf
>>> we can restart to receive packets.
>>>
>>> Signed-off-by: Linhaifeng <haifeng.lin-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>>> ---
>>>  lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
>>> index 1d74b34..5c7e0cd 100644
>>> --- a/lib/librte_pmd_virtio/virtio_rxtx.c
>>> +++ b/lib/librte_pmd_virtio/virtio_rxtx.c
>>> @@ -495,7 +495,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>>  		num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
>>>  
>>>  	if (num == 0)
>>> -		return 0;
>>> +		goto refill;
>>>  
>>>  	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
>>>  	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
>>> @@ -536,6 +536,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>>  
>>>  	rxvq->packets += nb_rx;
>>>  
>>> +refill:
>>>  	/* Allocate new mbuf for the used descriptor */
>>>  	error = ENOSPC;
>>>  	while (likely(!virtqueue_full(rxvq))) {
>>
>>



