* Re: RFS issue: no HW filter for paused stream
       [not found] <CAP7N4Kfd9TOPr_V6+R9yVtK80eGWbkAvPE8BStG0+WhvJHaLGg@mail.gmail.com>
@ 2011-09-19  6:05 ` Amir Vadai
  2011-09-19 15:13   ` Tom Herbert
  0 siblings, 1 reply; 8+ messages in thread
From: Amir Vadai @ 2011-09-19  6:05 UTC (permalink / raw)
  To: Tom Herbert; +Cc: oren, liranl, netdev, amirv

(Resending in plain text)

Hi Tom,
When a stream is paused and its rule expires while it is paused,
no new rule will be configured in the HW when traffic resumes.

Scenario:
1. Start iperf.
2. Pause it using Ctrl-Z
3. Start another iperf (to make sure first stream rule is expired)
4. Stop the second stream.
5. Resume first stream. Traffic is not steered to the right rx-queue.

From looking at the code:
- When the first stream started, RSS steered traffic to rx-queue 'x'.
Because the iperf server was running on a different CPU, a new rule was
added and current-cpu was set to desired-cpu.
- After the pause, the rule expired and was removed from the HW by the
net driver, but current-cpu wasn't cleared and is still equal to
desired-cpu.
- When the stream was resumed, traffic was steered again by RSS, and
because current-cpu was equal to desired-cpu, ndo_rx_flow_steer
wasn't called and no rule was configured in the HW.

Why isn't current-cpu cleared when expiring a rule?
- Amir


* Re: RFS issue: no HW filter for paused stream
  2011-09-19  6:05 ` RFS issue: no HW filter for paused stream Amir Vadai
@ 2011-09-19 15:13   ` Tom Herbert
  2011-09-19 15:52     ` Ben Hutchings
  0 siblings, 1 reply; 8+ messages in thread
From: Tom Herbert @ 2011-09-19 15:13 UTC (permalink / raw)
  To: Amir Vadai; +Cc: oren, liranl, netdev, amirv, Ben Hutchings

Ben:  Once an accelerated RFS flow expires (because the flow is idle?), how
should it get re-instantiated if the thread's CPU doesn't change?

Tom

On Sun, Sep 18, 2011 at 11:05 PM, Amir Vadai <amirv@dev.mellanox.co.il> wrote:
> (Resending in plain text)
>
> Hi Tom,
> When a stream is paused and its rule expires while it is paused,
> no new rule will be configured in the HW when traffic resumes.
>
> Scenario:
> 1. Start iperf.
> 2. Pause it using Ctrl-Z
> 3. Start another iperf (to make sure first stream rule is expired)
> 4. Stop the second stream.
> 5. Resume first stream. Traffic is not steered to the right rx-queue.
>
> From looking at the code:
> - When first stream started, RSS steered traffic to rx-queue 'x'.
> Because iperf server was running on a different CPU, a new rule was
> added and current-cpu was set to desired-cpu.
> - After paused, rule was expired and removed from HW by net driver.
> But current-cpu wasn't cleared and still is equal to desired-cpu.
> - When stream was resumed, traffic was steered again by RSS, and
> because current-cpu was equal to desired-cpu,  ndo_rx_flow_steer
> wasn't called and no rule was configured to the HW.
>
> Why isn't current-cpu cleared when expiring a rule?
> - Amir
>


* Re: RFS issue: no HW filter for paused stream
  2011-09-19 15:13   ` Tom Herbert
@ 2011-09-19 15:52     ` Ben Hutchings
  2011-09-20  6:53       ` Amir Vadai
  0 siblings, 1 reply; 8+ messages in thread
From: Ben Hutchings @ 2011-09-19 15:52 UTC (permalink / raw)
  To: Tom Herbert, Amir Vadai; +Cc: oren, liranl, netdev, amirv

On Mon, 2011-09-19 at 08:13 -0700, Tom Herbert wrote:
> Ben:  Once an accelerated RFS flow expires (because the flow is idle?), how
> should it get re-instantiated if the thread's CPU doesn't change?

Good question.

> Tom
> 
> On Sun, Sep 18, 2011 at 11:05 PM, Amir Vadai <amirv@dev.mellanox.co.il> wrote:
> > (Resending in plain text)
> >
> > Hi Tom,
> > When a stream is paused and its rule expires while it is paused,
> > no new rule will be configured in the HW when traffic resumes.
> >
> > Scenario:
> > 1. Start iperf.
> > 2. Pause it using Ctrl-Z
> > 3. Start another iperf (to make sure first stream rule is expired)
> > 4. Stop the second stream.
> > 5. Resume first stream. Traffic is not steered to the right rx-queue.
> >
> > From looking at the code:
> > - When first stream started, RSS steered traffic to rx-queue 'x'.
> > Because iperf server was running on a different CPU, a new rule was
> > added and current-cpu was set to desired-cpu.
> > - After paused, rule was expired and removed from HW by net driver.
> > But current-cpu wasn't cleared and still is equal to desired-cpu.
> > - When stream was resumed, traffic was steered again by RSS, and
> > because current-cpu was equal to desired-cpu,  ndo_rx_flow_steer
> > wasn't called and no rule was configured to the HW.
> >
> > Why isn't current-cpu cleared when expiring a rule?

Because I wrongly assumed that rules could be independently expired by
the driver and the RPS/RFS core code.

Try this (I haven't tested it myself yet):

From: Ben Hutchings <bhutchings@solarflare.com>
Date: Mon, 19 Sep 2011 16:44:13 +0100
Subject: [PATCH net-next] RPS: When a hardware filter is expired, ensure it
 can be re-added later

Amir Vadai wrote:
> When a stream is paused and its rule expires while it is paused,
> no new rule will be configured in the HW when traffic resumes.
[...]
> - When stream was resumed, traffic was steered again by RSS, and
> because current-cpu was equal to desired-cpu,  ndo_rx_flow_steer
> wasn't called and no rule was configured to the HW.
>
> Why isn't current-cpu cleared when expiring a rule?

When rps_may_expire_flow() matches a filter to a flow that is found to
be idle, unset the current CPU for that flow.

Reported-by: Amir Vadai <amirv@dev.mellanox.co.il>
---
 net/core/dev.c |   21 ++++++++++++++++-----
 1 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index b2e262e..3caf65a 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2817,11 +2817,22 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index,
 	if (flow_table && flow_id <= flow_table->mask) {
 		rflow = &flow_table->flows[flow_id];
 		cpu = ACCESS_ONCE(rflow->cpu);
-		if (rflow->filter == filter_id && cpu != RPS_NO_CPU &&
-		    ((int)(per_cpu(softnet_data, cpu).input_queue_head -
-			   rflow->last_qtail) <
-		     (int)(10 * flow_table->mask)))
-			expire = false;
+		if (rflow->filter == filter_id && cpu != RPS_NO_CPU) {
+			if ((int)(per_cpu(softnet_data, cpu).input_queue_head -
+				  rflow->last_qtail) <
+			    (int)(10 * flow_table->mask)) {
+				expire = false;
+			} else {
+				/* If this flow (or a flow with the
+				 * same hash value) becomes active on
+				 * the same CPU as before, we want to
+				 * restore the hardware filter.  Unset
+				 * the current CPU to ensure that
+				 * set_rps_cpu() will be called then.
+				 */
+				rflow->cpu = RPS_NO_CPU;
+			}
+		}
 	}
 	rcu_read_unlock();
 	return expire;
-- 
1.7.4.4

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.


* Re: RFS issue: no HW filter for paused stream
  2011-09-19 15:52     ` Ben Hutchings
@ 2011-09-20  6:53       ` Amir Vadai
  2011-09-21 15:09         ` Ben Hutchings
  0 siblings, 1 reply; 8+ messages in thread
From: Amir Vadai @ 2011-09-20  6:53 UTC (permalink / raw)
  To: Ben Hutchings; +Cc: Tom Herbert, oren, liranl, netdev, amirv, Diego Crupnicoff

This will unset the current CPU of the rflow that belongs to the desired
CPU.
The problem is that when the stream resumes, it goes to the wrong RXQ -
in our HW it is chosen by RSS, as long as there is no specific flow
steering rule for the stream.
We need to unset the current CPU of the rflow of the actual RXQ that the
packet arrived at:

diff --git a/net/core/dev.c b/net/core/dev.c
index 4b9981c..a6b3bc8 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2685,6 +2685,12 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		rflow = &flow_table->flows[flow_id];
 		rflow->cpu = next_cpu;
 		rflow->filter = rc;
+		/* If this flow (or a flow with the same hash value) becomes
+		 * active on the same CPU as before, we want to restore the
+		 * hardware filter.  Unset the current CPU to ensure that
+		 * set_rps_cpu() will be called then.
+		 */
+		old_rflow->cpu = RPS_NO_CPU;
 		if (old_rflow->filter == rflow->filter)
 			old_rflow->filter = RPS_NO_FILTER;
 	out:

Or even better, not set it in the first place - but I'm not sure I
understand the implications on RPS:

diff --git a/net/core/dev.c b/net/core/dev.c
index 4b9981c..748acdb 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2654,7 +2654,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 {
 	u16 tcpu;
 
-	tcpu = rflow->cpu = next_cpu;
+	tcpu = next_cpu;
 	if (tcpu != RPS_NO_CPU) {
 #ifdef CONFIG_RFS_ACCEL
 		struct netdev_rx_queue *rxqueue;



- Amir

On 09/19/2011 06:52 PM, Ben Hutchings wrote:
> On Mon, 2011-09-19 at 08:13 -0700, Tom Herbert wrote:
>> Ben:  Once an accelerated RFS flow expires (because the flow is idle?), how
>> should it get re-instantiated if the thread's CPU doesn't change?
>
> Good question.
>
>> Tom
>>
>> On Sun, Sep 18, 2011 at 11:05 PM, Amir Vadai <amirv@dev.mellanox.co.il> wrote:
>>> (Resending in plain text)
>>>
>>> Hi Tom,
>>> When a stream is paused and its rule expires while it is paused,
>>> no new rule will be configured in the HW when traffic resumes.
>>>
>>> Scenario:
>>> 1. Start iperf.
>>> 2. Pause it using Ctrl-Z
>>> 3. Start another iperf (to make sure first stream rule is expired)
>>> 4. Stop the second stream.
>>> 5. Resume first stream. Traffic is not steered to the right rx-queue.
>>>
>>> From looking at the code:
>>> - When first stream started, RSS steered traffic to rx-queue 'x'.
>>> Because iperf server was running on a different CPU, a new rule was
>>> added and current-cpu was set to desired-cpu.
>>> - After paused, rule was expired and removed from HW by net driver.
>>> But current-cpu wasn't cleared and still is equal to desired-cpu.
>>> - When stream was resumed, traffic was steered again by RSS, and
>>> because current-cpu was equal to desired-cpu,  ndo_rx_flow_steer
>>> wasn't called and no rule was configured to the HW.
>>>
>>> Why isn't current-cpu cleared when expiring a rule?
>
> Because I wrongly assumed that rules could be independently expired by
> the driver and the RPS/RFS core code.
>
> Try this (I haven't tested it myself yet):
>
> From: Ben Hutchings <bhutchings@solarflare.com>
> Date: Mon, 19 Sep 2011 16:44:13 +0100
> Subject: [PATCH net-next] RPS: When a hardware filter is expired, ensure it
>   can be re-added later
>
> Amir Vadai wrote:
>> When a stream is paused and its rule expires while it is paused,
>> no new rule will be configured in the HW when traffic resumes.
> [...]
>> - When stream was resumed, traffic was steered again by RSS, and
>> because current-cpu was equal to desired-cpu,  ndo_rx_flow_steer
>> wasn't called and no rule was configured to the HW.
>>
>> Why isn't current-cpu cleared when expiring a rule?
>
> When rps_may_expire_flow() matches a filter to a flow that is found to
> be idle, unset the current CPU for that flow.
>
> Reported-by: Amir Vadai <amirv@dev.mellanox.co.il>
> ---
>   net/core/dev.c |   21 ++++++++++++++++-----
>   1 files changed, 16 insertions(+), 5 deletions(-)
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index b2e262e..3caf65a 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -2817,11 +2817,22 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index,
>   	if (flow_table && flow_id <= flow_table->mask) {
>   		rflow = &flow_table->flows[flow_id];
>   		cpu = ACCESS_ONCE(rflow->cpu);
> -		if (rflow->filter == filter_id && cpu != RPS_NO_CPU &&
> -		    ((int)(per_cpu(softnet_data, cpu).input_queue_head -
> -			   rflow->last_qtail) <
> -		     (int)(10 * flow_table->mask)))
> -			expire = false;
> +		if (rflow->filter == filter_id && cpu != RPS_NO_CPU) {
> +			if ((int)(per_cpu(softnet_data, cpu).input_queue_head -
> +				  rflow->last_qtail) <
> +			    (int)(10 * flow_table->mask)) {
> +				expire = false;
> +			} else {
> +				/* If this flow (or a flow with the
> +				 * same hash value) becomes active
> +				 * on the same CPU as before, we want to
> +				 * restore the hardware filter.  Unset
> +				 * the current CPU to ensure that
> +				 * set_rps_cpu() will be called then.
> +				 */
> +				rflow->cpu = RPS_NO_CPU;
> +			}
> +		}
>   	}
>   	rcu_read_unlock();
>   	return expire;


* Re: RFS issue: no HW filter for paused stream
  2011-09-20  6:53       ` Amir Vadai
@ 2011-09-21 15:09         ` Ben Hutchings
  2011-09-22  6:11           ` Amir Vadai
  0 siblings, 1 reply; 8+ messages in thread
From: Ben Hutchings @ 2011-09-21 15:09 UTC (permalink / raw)
  To: amirv; +Cc: Tom Herbert, oren, liranl, netdev, Diego Crupnicoff

On Tue, 2011-09-20 at 09:53 +0300, Amir Vadai wrote:
> This will unset the current CPU of the rflow that belongs to the desired 
> CPU.
> The problem is when the stream resumes and it goes to the wrong RXQ - in 
> our HW, it will be according to RSS, as long as there is no specific 
> flow steering rule for the stream.

Sorry, yes.  Told you I didn't test my patch!

> We need to unset the current CPU of the rflow of the actual RXQ that the 
> packet arrived at:
[...]
> Or even better, not set it in the first place - but I'm not sure I
> understand the implications on RPS:
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 4b9981c..748acdb 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -2654,7 +2654,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>  {
>  	u16 tcpu;
> 
> -	tcpu = rflow->cpu = next_cpu;
> +	tcpu = next_cpu;
>  	if (tcpu != RPS_NO_CPU) {
>  #ifdef CONFIG_RFS_ACCEL
>  		struct netdev_rx_queue *rxqueue;
> 
> 

But that means we never move the flow to a new CPU in the non-
accelerated case.  So maybe the proper change would be:

--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2652,10 +2652,7 @@ static struct rps_dev_flow *
 set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 	    struct rps_dev_flow *rflow, u16 next_cpu)
 {
-	u16 tcpu;
-
-	tcpu = rflow->cpu = next_cpu;
-	if (tcpu != RPS_NO_CPU) {
+	if (next_cpu != RPS_NO_CPU) {
 #ifdef CONFIG_RFS_ACCEL
 		struct netdev_rx_queue *rxqueue;
 		struct rps_dev_flow_table *flow_table;
@@ -2683,16 +2680,16 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 			goto out;
 		old_rflow = rflow;
 		rflow = &flow_table->flows[flow_id];
-		rflow->cpu = next_cpu;
 		rflow->filter = rc;
 		if (old_rflow->filter == rflow->filter)
 			old_rflow->filter = RPS_NO_FILTER;
 	out:
 #endif
 		rflow->last_qtail =
-			per_cpu(softnet_data, tcpu).input_queue_head;
+			per_cpu(softnet_data, next_cpu).input_queue_head;
 	}
 
+	rflow->cpu = next_cpu;
 	return rflow;
 }
 
--- END ---

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.


* Re: RFS issue: no HW filter for paused stream
  2011-09-21 15:09         ` Ben Hutchings
@ 2011-09-22  6:11           ` Amir Vadai
  2011-09-27 23:42             ` Ben Hutchings
  0 siblings, 1 reply; 8+ messages in thread
From: Amir Vadai @ 2011-09-22  6:11 UTC (permalink / raw)
  To: Ben Hutchings; +Cc: Tom Herbert, oren, liranl, netdev, Diego Crupnicoff

Looks good, and now the code is much clearer.

- Amir

On 09/21/2011 06:09 PM, Ben Hutchings wrote:
> On Tue, 2011-09-20 at 09:53 +0300, Amir Vadai wrote:
>> This will unset the current CPU of the rflow that belongs to the desired
>> CPU.
>> The problem is when the stream resumes and it goes to the wrong RXQ - in
>> our HW, it will be according to RSS, as long as there is no specific
>> flow steering rule for the stream.
> Sorry, yes.  Told you I didn't test my patch!
>
>> We need to unset the current CPU of the rflow of the actual RXQ that the
>> packet arrived at:
> [...]
>> Or even better, not set it in the first place - but I'm not sure I
>> understand the implications on RPS:
>>
>> diff --git a/net/core/dev.c b/net/core/dev.c
>> index 4b9981c..748acdb 100644
>> --- a/net/core/dev.c
>> +++ b/net/core/dev.c
>> @@ -2654,7 +2654,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>>  {
>>  	u16 tcpu;
>>
>> -	tcpu = rflow->cpu = next_cpu;
>> +	tcpu = next_cpu;
>>  	if (tcpu != RPS_NO_CPU) {
>>  #ifdef CONFIG_RFS_ACCEL
>>  		struct netdev_rx_queue *rxqueue;
>>
>>
> But that means we never move the flow to a new CPU in the non-
> accelerated case.  So maybe the proper change would be:
>
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -2652,10 +2652,7 @@ static struct rps_dev_flow *
>   set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>   	    struct rps_dev_flow *rflow, u16 next_cpu)
>   {
> -	u16 tcpu;
> -
> -	tcpu = rflow->cpu = next_cpu;
> -	if (tcpu != RPS_NO_CPU) {
> +	if (next_cpu != RPS_NO_CPU) {
>   #ifdef CONFIG_RFS_ACCEL
>   		struct netdev_rx_queue *rxqueue;
>   		struct rps_dev_flow_table *flow_table;
> @@ -2683,16 +2680,16 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>   			goto out;
>   		old_rflow = rflow;
>   		rflow = &flow_table->flows[flow_id];
> -		rflow->cpu = next_cpu;
>   		rflow->filter = rc;
>   		if (old_rflow->filter == rflow->filter)
>   			old_rflow->filter = RPS_NO_FILTER;
>   	out:
>   #endif
>   		rflow->last_qtail =
> -			per_cpu(softnet_data, tcpu).input_queue_head;
> +			per_cpu(softnet_data, next_cpu).input_queue_head;
>   	}
>
> +	rflow->cpu = next_cpu;
>   	return rflow;
>   }
>
> --- END ---
>


* Re: RFS issue: no HW filter for paused stream
  2011-09-22  6:11           ` Amir Vadai
@ 2011-09-27 23:42             ` Ben Hutchings
  2011-10-02  7:59               ` Amir Vadai
  0 siblings, 1 reply; 8+ messages in thread
From: Ben Hutchings @ 2011-09-27 23:42 UTC (permalink / raw)
  To: Amir Vadai; +Cc: Tom Herbert, oren, liranl, netdev, Diego Crupnicoff

On Thu, 2011-09-22 at 09:11 +0300, Amir Vadai wrote:
> Looks good, and now the code is much clearer.

Does that mean that this change *works* for you?

Ben.

[...]
> > But that means we never move the flow to a new CPU in the non-
> > accelerated case.  So maybe the proper change would be:
> >
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -2652,10 +2652,7 @@ static struct rps_dev_flow *
> >   set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
> >   	    struct rps_dev_flow *rflow, u16 next_cpu)
> >   {
> > -	u16 tcpu;
> > -
> > -	tcpu = rflow->cpu = next_cpu;
> > -	if (tcpu != RPS_NO_CPU) {
> > +	if (next_cpu != RPS_NO_CPU) {
> >   #ifdef CONFIG_RFS_ACCEL
> >   		struct netdev_rx_queue *rxqueue;
> >   		struct rps_dev_flow_table *flow_table;
> > @@ -2683,16 +2680,16 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
> >   			goto out;
> >   		old_rflow = rflow;
> >   		rflow = &flow_table->flows[flow_id];
> > -		rflow->cpu = next_cpu;
> >   		rflow->filter = rc;
> >   		if (old_rflow->filter == rflow->filter)
> >   			old_rflow->filter = RPS_NO_FILTER;
> >   	out:
> >   #endif
> >   		rflow->last_qtail =
> > -			per_cpu(softnet_data, tcpu).input_queue_head;
> > +			per_cpu(softnet_data, next_cpu).input_queue_head;
> >   	}
> >
> > +	rflow->cpu = next_cpu;
> >   	return rflow;
> >   }
> >
> > --- END ---
> >

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.


* Re: RFS issue: no HW filter for paused stream
  2011-09-27 23:42             ` Ben Hutchings
@ 2011-10-02  7:59               ` Amir Vadai
  0 siblings, 0 replies; 8+ messages in thread
From: Amir Vadai @ 2011-10-02  7:59 UTC (permalink / raw)
  To: Ben Hutchings; +Cc: Tom Herbert, oren, liranl, netdev, Diego Crupnicoff

Yes - checked it and it works.

- Amir

On 09/28/2011 02:42 AM, Ben Hutchings wrote:
> On Thu, 2011-09-22 at 09:11 +0300, Amir Vadai wrote:
>> Looks good, and now the code is much clearer.
> Does that mean that this change *works* for you?
>
> Ben.
>
> [...]
>>> But that means we never move the flow to a new CPU in the non-
>>> accelerated case.  So maybe the proper change would be:
>>>
>>> --- a/net/core/dev.c
>>> +++ b/net/core/dev.c
>>> @@ -2652,10 +2652,7 @@ static struct rps_dev_flow *
>>>    set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>>>    	    struct rps_dev_flow *rflow, u16 next_cpu)
>>>    {
>>> -	u16 tcpu;
>>> -
>>> -	tcpu = rflow->cpu = next_cpu;
>>> -	if (tcpu != RPS_NO_CPU) {
>>> +	if (next_cpu != RPS_NO_CPU) {
>>>    #ifdef CONFIG_RFS_ACCEL
>>>    		struct netdev_rx_queue *rxqueue;
>>>    		struct rps_dev_flow_table *flow_table;
>>> @@ -2683,16 +2680,16 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>>>    			goto out;
>>>    		old_rflow = rflow;
>>>    		rflow = &flow_table->flows[flow_id];
>>> -		rflow->cpu = next_cpu;
>>>    		rflow->filter = rc;
>>>    		if (old_rflow->filter == rflow->filter)
>>>    			old_rflow->filter = RPS_NO_FILTER;
>>>    	out:
>>>    #endif
>>>    		rflow->last_qtail =
>>> -			per_cpu(softnet_data, tcpu).input_queue_head;
>>> +			per_cpu(softnet_data, next_cpu).input_queue_head;
>>>    	}
>>>
>>> +	rflow->cpu = next_cpu;
>>>    	return rflow;
>>>    }
>>>
>>> --- END ---
>>>

