linux-can.vger.kernel.org archive mirror
* [PATCH] can: isotp: omit unintended hrtimer restart on socket release
@ 2021-06-18 17:37 Oliver Hartkopp
  2021-06-19 21:36 ` Marc Kleine-Budde
  2021-08-28 13:20 ` AW: " Sven Schuchmann
  0 siblings, 2 replies; 10+ messages in thread
From: Oliver Hartkopp @ 2021-06-18 17:37 UTC (permalink / raw)
  To: linux-can; +Cc: Oliver Hartkopp

When closing the isotp socket, the potentially running hrtimers are
canceled before removing the subscription for CAN identifiers via
can_rx_unregister().

This may lead to an unintended (re)start of a hrtimer in isotp_rcv_cf()
and isotp_rcv_fc() in the case that a CAN frame is received by
isotp_rcv() while the subscription removal is processed.

However, isotp_rcv() is called under RCU protection, so after calling
can_rx_unregister(), we may call synchronize_rcu() in order to wait for any
RCU read-side critical sections to finish. This prevents the reception
of CAN frames after hrtimer_cancel() and therefore the unintended
(re)start of the hrtimers.

Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
---
 net/can/isotp.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/net/can/isotp.c b/net/can/isotp.c
index be6183f8ca11..234cc4ad179a 100644
--- a/net/can/isotp.c
+++ b/net/can/isotp.c
@@ -1026,13 +1026,10 @@ static int isotp_release(struct socket *sock)
 	list_del(&so->notifier);
 	spin_unlock(&isotp_notifier_lock);
 
 	lock_sock(sk);
 
-	hrtimer_cancel(&so->txtimer);
-	hrtimer_cancel(&so->rxtimer);
-
 	/* remove current filters & unregister */
 	if (so->bound && (!(so->opt.flags & CAN_ISOTP_SF_BROADCAST))) {
 		if (so->ifindex) {
 			struct net_device *dev;
 
@@ -1040,14 +1037,18 @@ static int isotp_release(struct socket *sock)
 			if (dev) {
 				can_rx_unregister(net, dev, so->rxid,
 						  SINGLE_MASK(so->rxid),
 						  isotp_rcv, sk);
 				dev_put(dev);
+				synchronize_rcu();
 			}
 		}
 	}
 
+	hrtimer_cancel(&so->txtimer);
+	hrtimer_cancel(&so->rxtimer);
+
 	so->ifindex = 0;
 	so->bound = 0;
 
 	sock_orphan(sk);
 	sock->sk = NULL;
-- 
2.30.2



* Re: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
  2021-06-18 17:37 [PATCH] can: isotp: omit unintended hrtimer restart on socket release Oliver Hartkopp
@ 2021-06-19 21:36 ` Marc Kleine-Budde
  2021-08-28 13:20 ` AW: " Sven Schuchmann
  1 sibling, 0 replies; 10+ messages in thread
From: Marc Kleine-Budde @ 2021-06-19 21:36 UTC (permalink / raw)
  To: Oliver Hartkopp; +Cc: linux-can


On 18.06.2021 19:37:13, Oliver Hartkopp wrote:
> When closing the isotp socket the potentially running hrtimers are
> canceled before removing the subscription for CAN idendifiers via
> can_rx_unregister().
> 
> This may lead to an unintended (re)start of a hrtimer in isotp_rcv_cf()
> and isotp_rcv_fc() in the case that a CAN frame is received by
> isotp_rcv() while the subscription removal is processed.
> 
> However, isotp_rcv() is called under RCU protection, so after calling
> can_rx_unregister, we may call synchronize_rcu in order to wait for any
> RCU read-side critical sections to finish. This prevents the reception
> of CAN frames after hrtimer_cancel() and therefore the unintended
> (re)start of the hrtimers.
> 
> Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>

Added to linux-can/testing

Thanks,
Marc

-- 
Pengutronix e.K.                 | Marc Kleine-Budde           |
Embedded Linux                   | https://www.pengutronix.de  |
Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |



* AW: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
  2021-06-18 17:37 [PATCH] can: isotp: omit unintended hrtimer restart on socket release Oliver Hartkopp
  2021-06-19 21:36 ` Marc Kleine-Budde
@ 2021-08-28 13:20 ` Sven Schuchmann
  2021-08-29 11:17   ` Oliver Hartkopp
  1 sibling, 1 reply; 10+ messages in thread
From: Sven Schuchmann @ 2021-08-28 13:20 UTC (permalink / raw)
  To: Oliver Hartkopp, linux-can; +Cc: Marc Kleine-Budde

Hello,
sorry, I'm late for the party :-) 
But I found that this patch decreases the performance of the ISO-TP stack.

I have created two test scripts: one plays the server, the other one
runs a test and measures how long it takes to transfer an ISO-TP
frame with 1000 bytes.

Without this patch it takes about 35ms to transfer the frame,
with this patch it takes about 145ms over vcan0.

Anyone have an idea on this?

Sven


bring up a vcan0 interface with:
sudo modprobe vcan
sudo ip link add dev vcan0 type vcan
sudo ifconfig vcan0 up

here are the scripts:
--- isotp_server.sh ---
#!/bin/bash
iface=vcan0
echo "Wait for Messages on $iface"
while true; do
    exec 3< <(isotprecv -s 77E -d 714 -b F -p AA:AA $iface)
    rxpid=$!
    wait $rxpid
    output=$(cat <&3)
    echo "7F 01 11" | isotpsend -s 77E -d 714 -p AA:AA -L 16:8:0 $iface
done

--- isotp_test.sh ---
#!/bin/bash
databyte=01
numbytes=1000
iface=vcan0
echo "Test iso-tp with $numbytes byte frames on $iface (data:$databyte)"
message="01 " # invalid service 01
for (( i=1; i<=$numbytes; i++ ))
do
   message="$message$databyte "
done
i=1
timemin=10000
timemax=0
timesum=0
while true; do
    ts=$(date +%s%N)
    exec 3< <(isotprecv -s 714 -d 77E -b 5 $iface)
    rxpid=$!
    echo $message | isotpsend -s 714 -d 77E -p AA:AA -L 16:8:0 $iface
    wait $rxpid
    output=$(cat <&3)
    timediff=$((($(date +%s%N) - $ts)/1000000))
    if [ "7F 01 11 " != "$output" ]; then
        echo "ERROR: $i >$output<"
        break
    fi
    if [ $timediff -gt $timemax ]; then
        timemax=$timediff
    fi
    if [ $timediff -lt $timemin ]; then
        timemin=$timediff
    fi
    ((timesum=timesum+timediff))
    timeavg=$(echo "$timesum / $i" | bc -l)
    printf "%5d / curr:%5d / min:%5d / max:%5d / avg:%7.1f\n" $i $timediff $timemin $timemax $timeavg
    ((i=i+1))
done
------

Sven Schuchmann
Schleißheimer Soft- und
Hardwareentwicklung GmbH
Am Kalkofen 10
61206 Nieder-Wöllstadt
GERMANY
Phone: +49 6034 9148 711
Fax: +49 6034 9148 91
Email: schuchmann@schleissheimer.de

Court of Registration: Amtsgericht Friedberg
Registration Number: HRB 1581
Management Board:
Hans-Joachim Schleißheimer
Christine Schleißheimer

> -----Original Message-----
> From: Oliver Hartkopp <socketcan@hartkopp.net>
> Sent: Friday, June 18, 2021 19:37
> To: linux-can@vger.kernel.org
> Cc: Oliver Hartkopp <socketcan@hartkopp.net>
> Subject: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
> 
> When closing the isotp socket the potentially running hrtimers are
> canceled before removing the subscription for CAN idendifiers via
> can_rx_unregister().
> 
> This may lead to an unintended (re)start of a hrtimer in isotp_rcv_cf()
> and isotp_rcv_fc() in the case that a CAN frame is received by
> isotp_rcv() while the subscription removal is processed.
> 
> However, isotp_rcv() is called under RCU protection, so after calling
> can_rx_unregister, we may call synchronize_rcu in order to wait for any
> RCU read-side critical sections to finish. This prevents the reception
> of CAN frames after hrtimer_cancel() and therefore the unintended
> (re)start of the hrtimers.
> 
> Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
> ---
>  net/can/isotp.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/net/can/isotp.c b/net/can/isotp.c
> index be6183f8ca11..234cc4ad179a 100644
> --- a/net/can/isotp.c
> +++ b/net/can/isotp.c
> @@ -1026,13 +1026,10 @@ static int isotp_release(struct socket *sock)
>  	list_del(&so->notifier);
>  	spin_unlock(&isotp_notifier_lock);
> 
>  	lock_sock(sk);
> 
> -	hrtimer_cancel(&so->txtimer);
> -	hrtimer_cancel(&so->rxtimer);
> -
>  	/* remove current filters & unregister */
>  	if (so->bound && (!(so->opt.flags & CAN_ISOTP_SF_BROADCAST))) {
>  		if (so->ifindex) {
>  			struct net_device *dev;
> 
> @@ -1040,14 +1037,18 @@ static int isotp_release(struct socket *sock)
>  			if (dev) {
>  				can_rx_unregister(net, dev, so->rxid,
>  						  SINGLE_MASK(so->rxid),
>  						  isotp_rcv, sk);
>  				dev_put(dev);
> +				synchronize_rcu();
>  			}
>  		}
>  	}
> 
> +	hrtimer_cancel(&so->txtimer);
> +	hrtimer_cancel(&so->rxtimer);
> +
>  	so->ifindex = 0;
>  	so->bound = 0;
> 
>  	sock_orphan(sk);
>  	sock->sk = NULL;
> --
> 2.30.2



* Re: AW: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
  2021-08-28 13:20 ` AW: " Sven Schuchmann
@ 2021-08-29 11:17   ` Oliver Hartkopp
  2021-08-29 18:28     ` AW: " Sven Schuchmann
  0 siblings, 1 reply; 10+ messages in thread
From: Oliver Hartkopp @ 2021-08-29 11:17 UTC (permalink / raw)
  To: Sven Schuchmann, linux-can; +Cc: Marc Kleine-Budde

Hello Sven,

On 28.08.21 15:20, Sven Schuchmann wrote:

> sorry, I'm late for the party :-)

NP ;-)

> But I found that this patch decreases the performance of ISO-TP Stack.

AFAICS the performance (aka throughput) of the ISO-TP stack is not 
touched but the grace period when closing an ISO-TP socket is increased.

> I have created two testscripts where one plays the server and the
> other one is running a test and measuring the time how long
> it takes to transfer an ISO-TP Frame with 1000 Bytes.
> 
> Without this patch it takes about 35ms to transfer the frame,
> with this patch it takes about 145ms over vcan0.
> 
> Anyone an idea on this?

Yes. We now synchronize the removal of data structures to prevent a 
use-after-free issue at socket close time.

The synchronize_rcu() call does this job by waiting for an RCU grace 
period, which leads to the extended time the close() syscall needs to 
complete.
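
For reference, a minimal sketch (not taken from this thread) that times
only the close() of freshly bound ISO-TP sockets on vcan0, so the
per-close grace period can be measured in isolation; the CAN IDs are the
ones from the scripts, error handling is mostly trimmed:

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

static long close_usecs(const char *ifname)
{
	struct sockaddr_can addr = { .can_family = AF_CAN };
	struct ifreq ifr;
	struct timespec t0, t1;
	int s;

	s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);
	if (s < 0)
		return -1;

	strcpy(ifr.ifr_name, ifname);
	ioctl(s, SIOCGIFINDEX, &ifr);

	addr.can_ifindex = ifr.ifr_ifindex;
	addr.can_addr.tp.tx_id = 0x714;
	addr.can_addr.tp.rx_id = 0x77E;

	if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(s);
		return -1;
	}

	/* isotp_release() - and with this patch the synchronize_rcu() -
	 * runs inside this close() syscall
	 */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	close(s);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	return (t1.tv_sec - t0.tv_sec) * 1000000L +
	       (t1.tv_nsec - t0.tv_nsec) / 1000L;
}

int main(void)
{
	int i;

	for (i = 0; i < 10; i++)
		printf("close() #%d took %ld us\n", i, close_usecs("vcan0"));

	return 0;
}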

> bring up a vcan0 interface with:
> sudo modprobe vcan
> sudo ip link add dev vcan0 type vcan
> sudo ifconfig vcan0 up
> 
> here are the scripts:
> --- isotp_server.sh ---
> #!/bin/bash
> iface=vcan0
> echo "Wait for Messages on $iface"
> while true; do
>      exec 3< <(isotprecv -s 77E -d 714 -b F -p AA:AA $iface)
>      rxpid=$!
>      wait $rxpid
>      output=$(cat <&3)
>      echo "7F 01 11" | isotpsend -s 77E -d 714 -p AA:AA -L 16:8:0 $iface
> done

IMO the issue arises with the use of isotpsend and isotprecv.
These tools are intended to give a hands-on impression of how the isotp 
stack works.

This kind of use in a script leads to the creation and (now delayed) 
*removal* of isotp sockets for *each* single PDU transfer.

The better approach would be to write a C program that creates ONE 
socket and simply read() from that socket and write() to it.

This should boost your performance even more.
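
As a rough illustration (not actual test code from this thread; the
interface name, CAN IDs and payload size are taken from the scripts,
while padding / link-layer options and most error handling are left
out), such a single-socket client could look like this:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
	struct sockaddr_can addr = { .can_family = AF_CAN };
	struct ifreq ifr;
	unsigned char txbuf[1000], rxbuf[4096];
	int s, i, n;

	s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);
	if (s < 0)
		return 1;

	strcpy(ifr.ifr_name, "vcan0");
	ioctl(s, SIOCGIFINDEX, &ifr);

	addr.can_ifindex = ifr.ifr_ifindex;
	addr.can_addr.tp.tx_id = 0x714;	/* like isotpsend -s 714 */
	addr.can_addr.tp.rx_id = 0x77E;	/* like isotpsend -d 77E */

	if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	memset(txbuf, 0x01, sizeof(txbuf));

	/* reuse the same socket for all PDU transfers - no per-PDU
	 * socket close() and therefore no per-PDU grace period
	 */
	for (i = 0; i < 100; i++) {
		if (write(s, txbuf, sizeof(txbuf)) < 0)
			break;
		n = read(s, rxbuf, sizeof(rxbuf));
		if (n < 0)
			break;
		printf("PDU %d: got %d byte response\n", i, n);
	}

	close(s);
	return 0;
}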

Is the performance a real requirement for your use-case or is this 
decreased socket close rate a finding which does not really affect your 
work?

Best regards,
Oliver



* AW: AW: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
  2021-08-29 11:17   ` Oliver Hartkopp
@ 2021-08-29 18:28     ` Sven Schuchmann
  2021-08-29 20:14       ` Oliver Hartkopp
  0 siblings, 1 reply; 10+ messages in thread
From: Sven Schuchmann @ 2021-08-29 18:28 UTC (permalink / raw)
  To: Oliver Hartkopp, linux-can; +Cc: Marc Kleine-Budde

Hello Oliver,
 
> > But I found that this patch decreases the performance of ISO-TP Stack.
> 
> AFAICS the performance (aka throughput) of the ISO-TP stack is not
> touched but the grace period when closing an ISO-TP socket is increased.
> 
> > I have created two testscripts where one plays the server and the
> > other one is running a test and measuring the time how long
> > it takes to transfer an ISO-TP Frame with 1000 Bytes.
> >
> > Without this patch it takes about 35ms to transfer the frame,
> > with this patch it takes about 145ms over vcan0.
> >
> > Anyone an idea on this?
> 
> Yes. We now syncronize the removal of data structures to prevent a
> use-after-free issue at socket close time.
> 
> The synchronize_rcu() call does this job at specific times which leads
> to this extended time the close() syscall needs to perform.

understood

> 
> > bring up a vcan0 interface with:
> > sudo modprobe vcan
> > sudo ip link add dev vcan0 type vcan
> > sudo ifconfig vcan0 up
> >
> > here are the scripts:
> > --- isotp_server.sh ---
> > #!/bin/bash
> > iface=vcan0
> > echo "Wait for Messages on $iface"
> > while true; do
> >      exec 3< <(isotprecv -s 77E -d 714 -b F -p AA:AA $iface)
> >      rxpid=$!
> >      wait $rxpid
> >      output=$(cat <&3)
> >      echo "7F 01 11" | isotpsend -s 77E -d 714 -p AA:AA -L 16:8:0 $iface
> > done
> 
> IMO the issue arises with the use of isotpsend and isotprecv.
> These tools are intended to get a hands-on impression how the isotp
> stack works.
> 
> This kind of use in a script leads to the creation and (now delayed)
> *removal* of isotp sockets for *each* single PDU transfer.

Maybe I am wrong but I see something different.
e.g. without this patch:
 (000.000240)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
 (000.000261)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
 (000.000496)  canfd0  714   [8]  2C 01 01 01 01 01 01 01

and with this patch:
 (000.000414)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
 (000.000262)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
 (000.001536)  canfd0  714   [8]  2C 01 01 01 01 01 01 01

So within one PDU transfer, the first Consecutive Frame after
a Flow Control takes about 10ms longer (the consecutive
frames are sent by the ISO-TP stack here; tested against a "real" ECU).
So the more data I transfer within one PDU, the more Flow
Controls there are, the more "delays" after each FC, and this
increases the time for the whole PDU.

> 
> The better approach would be to write a C program that creates ONE
> socket and simply read() from that socket and write() to it.
> 
> This should boost your performance even more.
> 
Sure, I do have this. These two scripts are only, let's say, a "reproducer".


> Is the performance a real requirement for your use-case or is this
> decreased socket close rate a finding which does not really affect your
> work?

We have an application here which flashes an ECU over CAN according to VW80126.
So transferring the data to the ECU as quickly as possible is a use case.

Sven



* Re: AW: AW: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
  2021-08-29 18:28     ` AW: " Sven Schuchmann
@ 2021-08-29 20:14       ` Oliver Hartkopp
  2021-08-29 20:28         ` Marc Kleine-Budde
  2021-08-29 21:09         ` AW: AW: AW: " Sven Schuchmann
  0 siblings, 2 replies; 10+ messages in thread
From: Oliver Hartkopp @ 2021-08-29 20:14 UTC (permalink / raw)
  To: Sven Schuchmann, linux-can; +Cc: Marc Kleine-Budde



On 29.08.21 20:28, Sven Schuchmann wrote:
> Hello Oliver,
>   
>>> But I found that this patch decreases the performance of ISO-TP Stack.
>>
>> AFAICS the performance (aka throughput) of the ISO-TP stack is not
>> touched but the grace period when closing an ISO-TP socket is increased.
>>
>>> I have created two testscripts where one plays the server and the
>>> other one is running a test and measuring the time how long
>>> it takes to transfer an ISO-TP Frame with 1000 Bytes.
>>>
>>> Without this patch it takes about 35ms to transfer the frame,
>>> with this patch it takes about 145ms over vcan0.
>>>
>>> Anyone an idea on this?
>>
>> Yes. We now syncronize the removal of data structures to prevent a
>> use-after-free issue at socket close time.
>>
>> The synchronize_rcu() call does this job at specific times which leads
>> to this extended time the close() syscall needs to perform.
> 
> understood
> 
>>
>>> bring up a vcan0 interface with:
>>> sudo modprobe vcan
>>> sudo ip link add dev vcan0 type vcan
>>> sudo ifconfig vcan0 up
>>>
>>> here are the scripts:
>>> --- isotp_server.sh ---
>>> #!/bin/bash
>>> iface=vcan0
>>> echo "Wait for Messages on $iface"
>>> while true; do
>>>       exec 3< <(isotprecv -s 77E -d 714 -b F -p AA:AA $iface)
>>>       rxpid=$!
>>>       wait $rxpid
>>>       output=$(cat <&3)
>>>       echo "7F 01 11" | isotpsend -s 77E -d 714 -p AA:AA -L 16:8:0 $iface
>>> done
>>
>> IMO the issue arises with the use of isotpsend and isotprecv.
>> These tools are intended to get a hands-on impression how the isotp
>> stack works.
>>
>> This kind of use in a script leads to the creation and (now delayed)
>> *removal* of isotp sockets for *each* single PDU transfer.
> 
> Maybe I am wrong but I see something different.
> e.g. without this patch:
>   (000.000240)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
>   (000.000261)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
>   (000.000496)  canfd0  714   [8]  2C 01 01 01 01 01 01 01
> 
> and with this patch:
>   (000.000414)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
>   (000.000262)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
>   (000.001536)  canfd0  714   [8]  2C 01 01 01 01 01 01 01
> 

I'm running a 5.14.0-rc7-00011-g6e764bcd1cf7 kernel here and see this:

  (000.000001)  vcan0  714   [8]  2B 01 01 01 01 01 01 01
  (000.000015)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
  (000.000005)  vcan0  714   [8]  2C 01 01 01 01 01 01 01

Test iso-tp with 1000 byte frames on vcan0 (data:01)
     1 / curr:   40 / min:   40 / max:   40 / avg:   40.0
     2 / curr:   30 / min:   30 / max:   40 / avg:   35.0
     3 / curr:   35 / min:   30 / max:   40 / avg:   35.0
     4 / curr:   52 / min:   30 / max:   52 / avg:   39.2
     5 / curr:   40 / min:   30 / max:   52 / avg:   39.4
(..)

when running your scripts from the initial post.

Is your canfd0 interface real hardware?

Best regards,
Oliver

> So within one PDU transfer the first Consecutive Frame after
> a Flow Control is taking about 10ms longer (the consecutive
> frames are sent by ISO-TP Stack here, Tested against a "real" ECU.)
> So if I transfer a lot of data within one PDU,
> the more Flow Controls I have and the more "delays" after each FC,
> and this increases the time for the whole PDU.)
> 
>>
>> The better approach would be to write a C program that creates ONE
>> socket and simply read() from that socket and write() to it.
>>
>> This should boost your performance even more.
>>
> Sure, I do have this. These two scripts are only lets say a "reproducer".
> 
> 
>> Is the performance a real requirement for your use-case or is this
>> decreased socket close rate a finding which does not really affect your
>> work?
> 
> We have a application here which flashes a ECU over CAN according to VW80126.
> So transferring the data as quick as possible to the ECU is a use-case.
> 
> Sven
> 


* Re: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
  2021-08-29 20:14       ` Oliver Hartkopp
@ 2021-08-29 20:28         ` Marc Kleine-Budde
  2021-08-29 21:09         ` AW: AW: AW: " Sven Schuchmann
  1 sibling, 0 replies; 10+ messages in thread
From: Marc Kleine-Budde @ 2021-08-29 20:28 UTC (permalink / raw)
  To: Oliver Hartkopp; +Cc: Sven Schuchmann, linux-can


On 29.08.2021 22:14:41, Oliver Hartkopp wrote:
> > > IMO the issue arises with the use of isotpsend and isotprecv.
> > > These tools are intended to get a hands-on impression how the isotp
> > > stack works.
> > > 
> > > This kind of use in a script leads to the creation and (now delayed)
> > > *removal* of isotp sockets for *each* single PDU transfer.
> > 
> > Maybe I am wrong but I see something different.
> > e.g. without this patch:
> >   (000.000240)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
> >   (000.000261)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
> >   (000.000496)  canfd0  714   [8]  2C 01 01 01 01 01 01 01
> > 
> > and with this patch:
> >   (000.000414)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
> >   (000.000262)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
> >   (000.001536)  canfd0  714   [8]  2C 01 01 01 01 01 01 01
> > 
> 
> I'm running a 5.14.0-rc7-00011-g6e764bcd1cf7 kernel here and see this:
> 
>  (000.000001)  vcan0  714   [8]  2B 01 01 01 01 01 01 01
>  (000.000015)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
>  (000.000005)  vcan0  714   [8]  2C 01 01 01 01 01 01 01
> 
> Test iso-tp with 1000 byte frames on vcan0 (data:01)
>     1 / curr:   40 / min:   40 / max:   40 / avg:   40.0
>     2 / curr:   30 / min:   30 / max:   40 / avg:   35.0
>     3 / curr:   35 / min:   30 / max:   40 / avg:   35.0
>     4 / curr:   52 / min:   30 / max:   52 / avg:   39.2
>     5 / curr:   40 / min:   30 / max:   52 / avg:   39.4
> (..)
> 
> when running your scripts from the initial post.
> 
> Is you canfd0 interface a real hardware?

I can reproduce the issue here, on an embedded system with a real
hardware CAN interface.

In j1939 we make use of RCU so that we don't need the synchronize_net().
Is it possible to use this in ISOTP, too?
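
To illustrate the pattern referred to here, a generic sketch of RCU
deferred freeing (the struct and function names are made up; this is
not actual j1939 or isotp code): the final kfree() happens from an RCU
callback after the grace period, so the release path itself does not
have to block in synchronize_rcu().

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_state {
	struct rcu_head rcu;
	/* ... protocol state that RCU readers may still look at ... */
	int dummy;
};

static void demo_state_free_rcu(struct rcu_head *head)
{
	struct demo_state *ds = container_of(head, struct demo_state, rcu);

	/* runs after all pre-existing RCU readers have finished */
	kfree(ds);
}

static void demo_state_release(struct demo_state *ds)
{
	/* the caller has already unpublished ds (e.g. removed the rx
	 * filter), so new readers can no longer find it; defer the
	 * free instead of blocking in synchronize_rcu()
	 */
	call_rcu(&ds->rcu, demo_state_free_rcu);
}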

regards,
Marc

-- 
Pengutronix e.K.                 | Marc Kleine-Budde           |
Embedded Linux                   | https://www.pengutronix.de  |
Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |



* AW: AW: AW: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
  2021-08-29 20:14       ` Oliver Hartkopp
  2021-08-29 20:28         ` Marc Kleine-Budde
@ 2021-08-29 21:09         ` Sven Schuchmann
  2021-08-30  7:55           ` Sven Schuchmann
  1 sibling, 1 reply; 10+ messages in thread
From: Sven Schuchmann @ 2021-08-29 21:09 UTC (permalink / raw)
  To: Oliver Hartkopp, linux-can; +Cc: Marc Kleine-Budde

Hello Oliver,

> >>> bring up a vcan0 interface with:
> >>> sudo modprobe vcan
> >>> sudo ip link add dev vcan0 type vcan
> >>> sudo ifconfig vcan0 up
> >>>
> >>> here are the scripts:
> >>> --- isotp_server.sh ---
> >>> #!/bin/bash
> >>> iface=vcan0
> >>> echo "Wait for Messages on $iface"
> >>> while true; do
> >>>       exec 3< <(isotprecv -s 77E -d 714 -b F -p AA:AA $iface)
> >>>       rxpid=$!
> >>>       wait $rxpid
> >>>       output=$(cat <&3)
> >>>       echo "7F 01 11" | isotpsend -s 77E -d 714 -p AA:AA -L 16:8:0 $iface
> >>> done
> >>
> >> IMO the issue arises with the use of isotpsend and isotprecv.
> >> These tools are intended to get a hands-on impression how the isotp
> >> stack works.
> >>
> >> This kind of use in a script leads to the creation and (now delayed)
> >> *removal* of isotp sockets for *each* single PDU transfer.
> >
> > Maybe I am wrong but I see something different.
> > e.g. without this patch:
> >   (000.000240)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
> >   (000.000261)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
> >   (000.000496)  canfd0  714   [8]  2C 01 01 01 01 01 01 01
> >
> > and with this patch:
> >   (000.000414)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
> >   (000.000262)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
> >   (000.001536)  canfd0  714   [8]  2C 01 01 01 01 01 01 01
> >
> 
> I'm running a 5.14.0-rc7-00011-g6e764bcd1cf7 kernel here and see this:
> 
>   (000.000001)  vcan0  714   [8]  2B 01 01 01 01 01 01 01
>   (000.000015)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
>   (000.000005)  vcan0  714   [8]  2C 01 01 01 01 01 01 01
> 
> Test iso-tp with 1000 byte frames on vcan0 (data:01)
>      1 / curr:   40 / min:   40 / max:   40 / avg:   40.0
>      2 / curr:   30 / min:   30 / max:   40 / avg:   35.0
>      3 / curr:   35 / min:   30 / max:   40 / avg:   35.0
>      4 / curr:   52 / min:   30 / max:   52 / avg:   39.2
>      5 / curr:   40 / min:   30 / max:   52 / avg:   39.4
> (..)
> 
> when running your scripts from the initial post.
> 
> Is you canfd0 interface a real hardware?
> 
Yes, the canfd0 interface is real hardware (an MCP2518FD connected
to a Raspberry Pi 4 Compute Module).

I am running a 5.10.60 kernel.
Using the test scripts I provided I can see this:
with the patch:

    1 / curr:  147 / min:  147 / max:  147 / avg:  147.0
    2 / curr:  107 / min:  107 / max:  147 / avg:  127.0
    3 / curr:  118 / min:  107 / max:  147 / avg:  124.0
    4 / curr:  138 / min:  107 / max:  147 / avg:  127.5
    5 / curr:  115 / min:  107 / max:  147 / avg:  125.0
    6 / curr:  158 / min:  107 / max:  158 / avg:  130.5
    7 / curr:  140 / min:  107 / max:  158 / avg:  131.9

and without the patch:

   39 / curr:   42 / min:   28 / max:   45 / avg:   30.9
   40 / curr:   41 / min:   28 / max:   45 / avg:   31.2
   41 / curr:   28 / min:   28 / max:   45 / avg:   31.1
   42 / curr:   29 / min:   28 / max:   45 / avg:   31.0
   43 / curr:   31 / min:   28 / max:   45 / avg:   31.0
   44 / curr:   28 / min:   28 / max:   45 / avg:   31.0

but if I compare the candumps I can see:
with the patch:

 (000.000008)  vcan0  714   [8]  2F 01 01 01 01 01 01 01
 (000.000209)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
 (000.000061)  vcan0  714   [8]  20 01 01 01 01 01 01 01

and without:

 (000.000004)  vcan0  714   [8]  2F 01 01 01 01 01 01 01
 (000.000069)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
 (000.000017)  vcan0  714   [8]  20 01 01 01 01 01 01 01

Sorry, I missed that: over here the delay seems to be in
the FC and not in the CF after the FC. This is what is
different compared to the real hardware.

So to me it seems that the RCU implementation
has changed on the way from 5.10 to 5.14?

Sven






* AW: AW: AW: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
  2021-08-29 21:09         ` AW: AW: AW: " Sven Schuchmann
@ 2021-08-30  7:55           ` Sven Schuchmann
  2021-08-30 12:44             ` Oliver Hartkopp
  0 siblings, 1 reply; 10+ messages in thread
From: Sven Schuchmann @ 2021-08-30  7:55 UTC (permalink / raw)
  To: Sven Schuchmann, Oliver Hartkopp, linux-can; +Cc: Marc Kleine-Budde

Hello Oliver,

> > >>> bring up a vcan0 interface with:
> > >>> sudo modprobe vcan
> > >>> sudo ip link add dev vcan0 type vcan
> > >>> sudo ifconfig vcan0 up
> > >>>
> > >>> here are the scripts:
> > >>> --- isotp_server.sh ---
> > >>> #!/bin/bash
> > >>> iface=vcan0
> > >>> echo "Wait for Messages on $iface"
> > >>> while true; do
> > >>>       exec 3< <(isotprecv -s 77E -d 714 -b F -p AA:AA $iface)
> > >>>       rxpid=$!
> > >>>       wait $rxpid
> > >>>       output=$(cat <&3)
> > >>>       echo "7F 01 11" | isotpsend -s 77E -d 714 -p AA:AA -L 16:8:0 $iface
> > >>> done
> > >>
> > >> IMO the issue arises with the use of isotpsend and isotprecv.
> > >> These tools are intended to get a hands-on impression how the isotp
> > >> stack works.
> > >>
> > >> This kind of use in a script leads to the creation and (now delayed)
> > >> *removal* of isotp sockets for *each* single PDU transfer.
> > >
> > > Maybe I am wrong but I see something different.
> > > e.g. without this patch:
> > >   (000.000240)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
> > >   (000.000261)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
> > >   (000.000496)  canfd0  714   [8]  2C 01 01 01 01 01 01 01
> > >
> > > and with this patch:
> > >   (000.000414)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
> > >   (000.000262)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
> > >   (000.001536)  canfd0  714   [8]  2C 01 01 01 01 01 01 01
> > >
> >
> > I'm running a 5.14.0-rc7-00011-g6e764bcd1cf7 kernel here and see this:
> >
> >   (000.000001)  vcan0  714   [8]  2B 01 01 01 01 01 01 01
> >   (000.000015)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
> >   (000.000005)  vcan0  714   [8]  2C 01 01 01 01 01 01 01
> >
> > Test iso-tp with 1000 byte frames on vcan0 (data:01)
> >      1 / curr:   40 / min:   40 / max:   40 / avg:   40.0
> >      2 / curr:   30 / min:   30 / max:   40 / avg:   35.0
> >      3 / curr:   35 / min:   30 / max:   40 / avg:   35.0
> >      4 / curr:   52 / min:   30 / max:   52 / avg:   39.2
> >      5 / curr:   40 / min:   30 / max:   52 / avg:   39.4
> > (..)
> >
> > when running your scripts from the initial post.
> >
> > Is you canfd0 interface a real hardware?
> >
> Yes, the canfd0 interface is a real hardware (a MCP2518FD connected
> to a RaspberryPi-4-Compute-Module).
> 
> I am running a 5.10.60 kernel.
> With using the testscripts I provided I can see this:
> with the patch:
> 
>     1 / curr:  147 / min:  147 / max:  147 / avg:  147.0
>     2 / curr:  107 / min:  107 / max:  147 / avg:  127.0
>     3 / curr:  118 / min:  107 / max:  147 / avg:  124.0
>     4 / curr:  138 / min:  107 / max:  147 / avg:  127.5
>     5 / curr:  115 / min:  107 / max:  147 / avg:  125.0
>     6 / curr:  158 / min:  107 / max:  158 / avg:  130.5
>     7 / curr:  140 / min:  107 / max:  158 / avg:  131.9
> 
> and without the patch:
> 
>    39 / curr:   42 / min:   28 / max:   45 / avg:   30.9
>    40 / curr:   41 / min:   28 / max:   45 / avg:   31.2
>    41 / curr:   28 / min:   28 / max:   45 / avg:   31.1
>    42 / curr:   29 / min:   28 / max:   45 / avg:   31.0
>    43 / curr:   31 / min:   28 / max:   45 / avg:   31.0
>    44 / curr:   28 / min:   28 / max:   45 / avg:   31.0
> 
> but if I compare the candumps I can see:
> with the patch:
> 
>  (000.000008)  vcan0  714   [8]  2F 01 01 01 01 01 01 01
>  (000.000209)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
>  (000.000061)  vcan0  714   [8]  20 01 01 01 01 01 01 01
> 
> and without:
> 
>  (000.000004)  vcan0  714   [8]  2F 01 01 01 01 01 01 01
>  (000.000069)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
>  (000.000017)  vcan0  714   [8]  20 01 01 01 01 01 01 01
> 
> sorry, I missed that: Over here the delay seems to be in
> the FC and not in the CF after the FC. This is what is
> different compared to the real hardware.
> 
> So to me it seems that the rcu implementation
> has changed on the way from 5.10 to 5.14?

Just checked with a 5.14.0-rc6 which contains the patch, same result:

   93 / curr:  143 / min:  129 / max:  200 / avg:  156.2
   94 / curr:  144 / min:  129 / max:  200 / avg:  156.0
   95 / curr:  141 / min:  129 / max:  200 / avg:  155.9
   96 / curr:  171 / min:  129 / max:  200 / avg:  156.0
   97 / curr:  138 / min:  129 / max:  200 / avg:  155.8
   98 / curr:  137 / min:  129 / max:  200 / avg:  155.6

 (000.000011)  vcan0  714   [8]  2B 01 01 01 01 01 01 01
 (000.000193)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
 (000.000037)  vcan0  714   [8]  2C 01 01 01 01 01 01 01

So maybe there is something wrong on the rpi?

Sven



* Re: AW: AW: AW: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
  2021-08-30  7:55           ` Sven Schuchmann
@ 2021-08-30 12:44             ` Oliver Hartkopp
  0 siblings, 0 replies; 10+ messages in thread
From: Oliver Hartkopp @ 2021-08-30 12:44 UTC (permalink / raw)
  To: Sven Schuchmann, linux-can, Marc Kleine-Budde

Hi Sven,

On 30.08.21 09:55, Sven Schuchmann wrote:

>> but if I compare the candumps I can see:
>> with the patch:
>>
>>   (000.000008)  vcan0  714   [8]  2F 01 01 01 01 01 01 01
>>   (000.000209)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
>>   (000.000061)  vcan0  714   [8]  20 01 01 01 01 01 01 01
>>
>> and without:
>>
>>   (000.000004)  vcan0  714   [8]  2F 01 01 01 01 01 01 01
>>   (000.000069)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
>>   (000.000017)  vcan0  714   [8]  20 01 01 01 01 01 01 01
>>
>> sorry, I missed that: Over here the delay seems to be in
>> the FC and not in the CF after the FC. This is what is
>> different compared to the real hardware.
>>
>> So to me it seems that the rcu implementation
>> has changed on the way from 5.10 to 5.14?
> 
> Just checked with a 5.14.0-rc6 which contains the patch, same result:
> 
>     93 / curr:  143 / min:  129 / max:  200 / avg:  156.2
>     94 / curr:  144 / min:  129 / max:  200 / avg:  156.0
>     95 / curr:  141 / min:  129 / max:  200 / avg:  155.9
>     96 / curr:  171 / min:  129 / max:  200 / avg:  156.0
>     97 / curr:  138 / min:  129 / max:  200 / avg:  155.8
>     98 / curr:  137 / min:  129 / max:  200 / avg:  155.6
> 
>   (000.000011)  vcan0  714   [8]  2B 01 01 01 01 01 01 01
>   (000.000193)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
>   (000.000037)  vcan0  714   [8]  2C 01 01 01 01 01 01 01
> 
> So maybe there is something wrong on the rpi?

I see a similar difference on my i7-8650U system:

"5" without and "65" with the patch.

The problem remains the added time that is now introduced at 
socket close time by synchronize_rcu().

In your script you are waiting for the isotprecv process to finally end with:

	wait $rxpid

And that's the expected effect ...

It looks like the script works fine without the 'wait' code (which then 
does not wait for the process removal).

@mkl: I assume we have to live with that increased time at socket close 
for security reasons, right?

Best regards,
Oliver

PS: Btw, IMO a C program is still the better approach here. 
isotp[send|recv] open/close the sockets for each PDU in the given setup :-/


end of thread

Thread overview: 10+ messages
2021-06-18 17:37 [PATCH] can: isotp: omit unintended hrtimer restart on socket release Oliver Hartkopp
2021-06-19 21:36 ` Marc Kleine-Budde
2021-08-28 13:20 ` AW: " Sven Schuchmann
2021-08-29 11:17   ` Oliver Hartkopp
2021-08-29 18:28     ` AW: " Sven Schuchmann
2021-08-29 20:14       ` Oliver Hartkopp
2021-08-29 20:28         ` Marc Kleine-Budde
2021-08-29 21:09         ` AW: AW: AW: " Sven Schuchmann
2021-08-30  7:55           ` Sven Schuchmann
2021-08-30 12:44             ` Oliver Hartkopp
