* RPS will assign different smp_processor_id for the same packet?
@ 2011-04-21 15:50 zhou rui
  2011-04-21 16:08 ` Eric Dumazet
  0 siblings, 1 reply; 12+ messages in thread
From: zhou rui @ 2011-04-21 15:50 UTC
  To: netdev

kernel 2.6.36.4
CONFIG_RPS=y, but the cpu mask is not set:

/sys/class/net/eth1/queues/rx-0 # cat rps_cpus
00

register a hook func:
  prot_hook.func = packet_rcv;
  prot_hook.type = htons(ETH_P_ALL);
  dev_add_pack(&prot_hook);
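
(For reference, a fuller self-contained sketch of such a tap as a kernel
module; the cpu_tap_* names are made up for illustration, and this is only
a sketch assuming a 2.6.36-era kernel, with error handling elided:)

  #include <linux/kernel.h>
  #include <linux/module.h>
  #include <linux/smp.h>
  #include <linux/netdevice.h>
  #include <linux/skbuff.h>
  #include <linux/if_ether.h>

  static struct packet_type cpu_tap_hook;

  static int cpu_tap_rcv(struct sk_buff *skb, struct net_device *dev,
                         struct packet_type *pt, struct net_device *orig_dev)
  {
          /* log which cpu delivered this frame to the tap */
          printk(KERN_INFO "cpu=%d\n", smp_processor_id());
          kfree_skb(skb);         /* drop our reference to the skb */
          return 0;
  }

  static int __init cpu_tap_init(void)
  {
          cpu_tap_hook.func = cpu_tap_rcv;
          cpu_tap_hook.type = htons(ETH_P_ALL);
          dev_add_pack(&cpu_tap_hook);
          return 0;
  }

  static void __exit cpu_tap_exit(void)
  {
          dev_remove_pack(&cpu_tap_hook);
  }

  module_init(cpu_tap_init);
  module_exit(cpu_tap_exit);
  MODULE_LICENSE("GPL");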


replay the same traffic at a very slow speed, and printk the
smp_processor_id() in packet_rcv():
first time:
cpu=4
cpu=3
cpu=6
cpu=7

second time:
cpu=7
cpu=1
cpu=5
cpu=2

is it normal?

thanks
rui


* Re: RPS will assign different smp_processor_id for the same packet?
  2011-04-21 15:50 RPS will assign different smp_processor_id for the same packet? zhou rui
@ 2011-04-21 16:08 ` Eric Dumazet
  2011-04-21 16:25   ` Eric Dumazet
  2011-04-21 16:27   ` zhou rui
  0 siblings, 2 replies; 12+ messages in thread
From: Eric Dumazet @ 2011-04-21 16:08 UTC
  To: zhou rui; +Cc: netdev

On Thursday 21 April 2011 at 23:50 +0800, zhou rui wrote:
> is it normal?

Yes it is.

What would you expect?




* Re: RPS will assign different smp_processor_id for the same packet?
  2011-04-21 16:08 ` Eric Dumazet
@ 2011-04-21 16:25   ` Eric Dumazet
  2011-04-21 16:29     ` zhou rui
  2011-04-21 16:27   ` zhou rui
  1 sibling, 1 reply; 12+ messages in thread
From: Eric Dumazet @ 2011-04-21 16:25 UTC
  To: zhou rui; +Cc: netdev

On Thursday 21 April 2011 at 18:08 +0200, Eric Dumazet wrote:
> On Thursday 21 April 2011 at 23:50 +0800, zhou rui wrote:
> > is it normal?
> 
> Yes it is.
> 
> What would you expect?
> 

If rps_cpus contains only '0' bits, it basically means RPS is not active
for this input queue.

The CPU is therefore not changed: the cpu handling NAPI on your network
device calls directly into the upper Linux stack.

Seeing your traces, it also means your device spreads its interrupts
across many different cpus, which might not be optimal.

Check /proc/irq/{irq_number}/smp_affinity; it probably contains "ff".
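
(For reference, a quick way to inspect and steer this from the shell; the
irq number 30 below is made up, take the real one from /proc/interrupts:)

  # grep eth1 /proc/interrupts            # find eth1's irq number(s)
  # cat /proc/irq/30/smp_affinity         # cpus allowed to take the hard irq
  ff
  # echo 01 > /proc/irq/30/smp_affinity   # pin the hard irq to cpu0
  # echo fe > /sys/class/net/eth1/queues/rx-0/rps_cpus   # RPS on cpus 1-7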





* Re: RPS will assign different smp_processor_id for the same packet?
  2011-04-21 16:08 ` Eric Dumazet
  2011-04-21 16:25   ` Eric Dumazet
@ 2011-04-21 16:27   ` zhou rui
  1 sibling, 0 replies; 12+ messages in thread
From: zhou rui @ 2011-04-21 16:27 UTC
  To: Eric Dumazet; +Cc: netdev

On Friday, April 22, 2011, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Thursday 21 April 2011 at 23:50 +0800, zhou rui wrote:
>> is it normal?
>
> Yes it is.
>
> What would you expect?
I want the same CPU for the same packet.
If I echo ff > rps_cpus, will I get that?
And what is the design rationale for different CPUs in RPS?
I understand the NIC will assign the same rxq to packets that have the
same hash.


* Re: RPS will assign different smp_processor_id for the same packet?
  2011-04-21 16:25   ` Eric Dumazet
@ 2011-04-21 16:29     ` zhou rui
  2011-04-23 15:31       ` zhou rui
  0 siblings, 1 reply; 12+ messages in thread
From: zhou rui @ 2011-04-21 16:29 UTC
  To: Eric Dumazet; +Cc: netdev

On Friday, April 22, 2011, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> If rps_cpus contains only '0' bits, it basically means RPS is not active
> for this input queue.
>
> The CPU is therefore not changed: the cpu handling NAPI on your network
> device calls directly into the upper Linux stack.
>
> Seeing your traces, it also means your device spreads its interrupts
> across many different cpus, which might not be optimal.
>
> Check /proc/irq/{irq_number}/smp_affinity; it probably contains "ff".
Thanks, I just saw this email.


* Re: RPS will assign different smp_processor_id for the same packet?
  2011-04-21 16:29     ` zhou rui
@ 2011-04-23 15:31       ` zhou rui
  2011-04-23 19:56         ` Tom Herbert
  0 siblings, 1 reply; 12+ messages in thread
From: zhou rui @ 2011-04-23 15:31 UTC
  To: Eric Dumazet; +Cc: netdev

One more question:

in the function "int netif_receive_skb(struct sk_buff *skb)":

  cpu = get_rps_cpu(skb->dev, skb, &rflow);
  if (cpu >= 0) {
          /* steer the packet to the RPS-selected cpu's backlog queue */
          ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
  ....

The cpu there can probably differ from the current processor id
(smp_processor_id())?  Let's say get_rps_cpu() -> cpu0 and
smp_processor_id() -> cpu1.  When this happens, does it mean that cpu1 is
handling the softirq but has to divert the packet to cpu0 (via a new
softirq)?

So for one packet it involves 2 softirqs?

Is it possible to run get_rps_cpu() in the interrupt, and then let the
target cpu run only one softirq to handle the packet?

thanks



* Re: RPS will assign different smp_processor_id for the same packet?
  2011-04-23 15:31       ` zhou rui
@ 2011-04-23 19:56         ` Tom Herbert
  2011-04-24  2:00           ` zhou rui
  0 siblings, 1 reply; 12+ messages in thread
From: Tom Herbert @ 2011-04-23 19:56 UTC
  To: zhou rui; +Cc: Eric Dumazet, netdev

On Sat, Apr 23, 2011 at 8:31 AM, zhou rui <zhourui.cn@gmail.com> wrote:
> Is it possible to run get_rps_cpu() in the interrupt, and then let the
> target cpu run only one softirq to handle the packet?
>
Yes, this is what a non-NAPI driver would do.

Tom
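
For reference, a rough sketch of that non-NAPI path, modeled on netif_rx()
as of 2.6.36 (simplified, with the preemption/rcu guards and accounting
elided):

  int netif_rx(struct sk_buff *skb)
  {
          struct rps_dev_flow voidflow, *rflow = &voidflow;
          int cpu;

          /* still in hard irq context: pick the RPS target cpu now */
          cpu = get_rps_cpu(skb->dev, skb, &rflow);
          if (cpu < 0)
                  cpu = smp_processor_id();

          /* queue on that cpu's backlog; the target cpu then needs only
           * one softirq pass to process the packet */
          return enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
  }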




* Re: RPS will assign different smp_processor_id for the same packet?
  2011-04-23 19:56         ` Tom Herbert
@ 2011-04-24  2:00           ` zhou rui
  2011-04-24  8:00             ` Eric Dumazet
  0 siblings, 1 reply; 12+ messages in thread
From: zhou rui @ 2011-04-24  2:00 UTC
  To: Tom Herbert; +Cc: Eric Dumazet, netdev

On Sun, Apr 24, 2011 at 3:56 AM, Tom Herbert <therbert@google.com> wrote:
> On Sat, Apr 23, 2011 at 8:31 AM, zhou rui <zhourui.cn@gmail.com> wrote:
>> Is it possible to run get_rps_cpu() in the interrupt, and then let the
>> target cpu run only one softirq to handle the packet?
>>
> Yes, this is what a non-NAPI driver would do.
>
> Tom
>
Non-NAPI drivers will run get_rps_cpu() in the irq, but why do NAPI
drivers run get_rps_cpu() in the softirq? (If I understand correctly,
netif_receive_skb() executes in a softirq?)
thanks!



* Re: RPS will assign different smp_processor_id for the same packet?
  2011-04-24  2:00           ` zhou rui
@ 2011-04-24  8:00             ` Eric Dumazet
  2011-04-24  9:36               ` zhou rui
  0 siblings, 1 reply; 12+ messages in thread
From: Eric Dumazet @ 2011-04-24  8:00 UTC
  To: zhou rui; +Cc: Tom Herbert, netdev

On Sunday 24 April 2011 at 10:00 +0800, zhou rui wrote:
> On Sun, Apr 24, 2011 at 3:56 AM, Tom Herbert <therbert@google.com> wrote:
> > Yes, this is what a non-NAPI driver would do.
> >
> Non-NAPI drivers will run get_rps_cpu() in the irq, but why do NAPI
> drivers run get_rps_cpu() in the softirq? (If I understand correctly,
> netif_receive_skb() executes in a softirq?)

It's hard to understand what your problem or question is.

All the heavy receive-side networking work is handled in softirq context.

NAPI reduces the number of hardware IRQs in stress/load situations by
fetching several frames at once.





* Re: RPS will assign different smp_processor_id for the same packet?
  2011-04-24  8:00             ` Eric Dumazet
@ 2011-04-24  9:36               ` zhou rui
  2011-04-24 10:53                 ` Eric Dumazet
  2011-04-25 11:02                 ` Neil Horman
  0 siblings, 2 replies; 12+ messages in thread
From: zhou rui @ 2011-04-24  9:36 UTC
  To: Eric Dumazet; +Cc: Tom Herbert, netdev

On Sun, Apr 24, 2011 at 4:00 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> It's hard to understand what your problem or question is.
>
> All the heavy receive-side networking work is handled in softirq context.
>
> NAPI reduces the number of hardware IRQs in stress/load situations by
> fetching several frames at once.

My understanding:

non-NAPI scenario:

netif_rx() (in irq: get_rps_cpu(), enqueue_to_backlog() to deliver the
packet to a cpu queue) ------> net_rx_action() (in softirq: dequeue and
process the packet)

NAPI:

what does RPS do? (in irq) ------> net_rx_action() (softirq) ----->
netif_receive_skb() (get_rps_cpu(), enqueue the packet)

So my question is:
for NAPI, get_rps_cpu() will be done in a softirq?

If the above is true, will this happen?
packet_for_cpu_1 --> cpu0 (netif_receive_skb, in softirq) ---> delivered
to cpu1 (softirq)


* Re: RPS will assign different smp_processor_id for the same packet?
  2011-04-24  9:36               ` zhou rui
@ 2011-04-24 10:53                 ` Eric Dumazet
  2011-04-25 11:02                 ` Neil Horman
  1 sibling, 0 replies; 12+ messages in thread
From: Eric Dumazet @ 2011-04-24 10:53 UTC
  To: zhou rui; +Cc: Tom Herbert, netdev

On Sunday 24 April 2011 at 17:36 +0800, zhou rui wrote:
> So my question is:
> for NAPI, get_rps_cpu() will be done in a softirq?
> 


> If the above is true, will this happen?
> packet_for_cpu_1 --> cpu0 (netif_receive_skb, in softirq) ---> delivered
> to cpu1 (softirq)

NAPI is done under softirq, yes.

netif_receive_skb():
	cpu = get_rps_cpu(skb->dev, skb, &rflow);
	if (cpu >= 0 && cpu != smp_processor_id())
		/* steer: queue on the target cpu's backlog */
		enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
	else
		/* stay local: process on this cpu, no extra hop */
		__netif_receive_skb(skb);


enqueue_to_backlog() triggers an IPI (hard IRQ) to the other cpu1 to queue
a NAPI context (and raise its softirq).
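
For reference, a rough sketch of the receiving side of that IPI, modeled
on rps_trigger_softirq() in net/core/dev.c of that era (simplified):

	/* runs on the target cpu, in hard irq context, when the IPI arrives */
	static void rps_trigger_softirq(void *data)
	{
		struct softnet_data *sd = data;

		/* queue this cpu's backlog napi device and raise NET_RX_SOFTIRQ;
		 * net_rx_action() then dequeues and processes the packet here */
		____napi_schedule(sd, &sd->backlog);
	}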





* Re: RPS will assign different smp_processor_id for the same packet?
  2011-04-24  9:36               ` zhou rui
  2011-04-24 10:53                 ` Eric Dumazet
@ 2011-04-25 11:02                 ` Neil Horman
  1 sibling, 0 replies; 12+ messages in thread
From: Neil Horman @ 2011-04-25 11:02 UTC
  To: zhou rui; +Cc: Eric Dumazet, Tom Herbert, netdev

On Sun, Apr 24, 2011 at 05:36:51PM +0800, zhou rui wrote:
> My understanding:
>
> non-NAPI scenario:
>
> netif_rx() (in irq: get_rps_cpu(), enqueue_to_backlog() to deliver the
> packet to a cpu queue) ------> net_rx_action() (in softirq: dequeue and
> process the packet)
>
> NAPI:
>
> what does RPS do? (in irq) ------> net_rx_action() (softirq) ----->
> netif_receive_skb() (get_rps_cpu(), enqueue the packet)
>
> So my question is:
> for NAPI, get_rps_cpu() will be done in a softirq?
>
> If the above is true, will this happen?
> packet_for_cpu_1 --> cpu0 (netif_receive_skb, in softirq) ---> delivered
> to cpu1 (softirq)
As Eric noted, NAPI-enabled drivers using RPS will have a flow like the
one he described:
irq
 napi_schedule(driver)
  driver poll
   netif_receive_skb
    enqueue_to_backlog
     ipi
      napi_schedule (backlog dev)
       backlog dev poll
        netif_receive_skb

Drivers using netif_rx() do the same thing; the only difference is that
they schedule the local backlog device for a napi poll rather than their
own device.  This is done because the IPI to the remote cpu is issued from
within the napi softirq, so we need to kick the local cpu to run a napi
poll cycle after such a driver receives a frame.
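
Schematically, the netif_rx() case for a packet steered to a remote cpu
(a sketch under the same assumptions as above):

irq
 netif_rx (get_rps_cpu runs here, still in hard irq)
  enqueue_to_backlog (on the target cpu's queue)
   napi_schedule (local backlog dev)
    local backlog poll
     ipi
      napi_schedule (backlog dev on target cpu)
       backlog dev poll
        netif_receive_skb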

Neil




Thread overview: 12+ messages
2011-04-21 15:50 RPS will assign different smp_processor_id for the same packet? zhou rui
2011-04-21 16:08 ` Eric Dumazet
2011-04-21 16:25   ` Eric Dumazet
2011-04-21 16:29     ` zhou rui
2011-04-23 15:31       ` zhou rui
2011-04-23 19:56         ` Tom Herbert
2011-04-24  2:00           ` zhou rui
2011-04-24  8:00             ` Eric Dumazet
2011-04-24  9:36               ` zhou rui
2011-04-24 10:53                 ` Eric Dumazet
2011-04-25 11:02                 ` Neil Horman
2011-04-21 16:27   ` zhou rui
