From: Oliver Hartkopp <socketcan@hartkopp.net>
To: Sven Schuchmann <schuchmann@schleissheimer.de>,
	"linux-can@vger.kernel.org" <linux-can@vger.kernel.org>
Cc: Marc Kleine-Budde <mkl@pengutronix.de>
Subject: Re: AW: AW: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
Date: Sun, 29 Aug 2021 22:14:41 +0200	[thread overview]
Message-ID: <fa1cac52-73aa-b092-0137-2b1ed734f0ee@hartkopp.net> (raw)
In-Reply-To: <DB8P190MB0634E1A09E060C9A5A539073D9CA9@DB8P190MB0634.EURP190.PROD.OUTLOOK.COM>



On 29.08.21 20:28, Sven Schuchmann wrote:
> Hello Oliver,
>   
>>> But I found that this patch decreases the performance of ISO-TP Stack.
>>
>> AFAICS the performance (aka throughput) of the ISO-TP stack is not
>> touched, but the grace period when closing an ISO-TP socket is increased.
>>
>>> I have created two test scripts: one plays the server and the
>>> other runs a test that measures how long it takes to transfer
>>> an ISO-TP frame with 1000 bytes.
>>>
>>> Without this patch it takes about 35ms to transfer the frame;
>>> with this patch it takes about 145ms over vcan0.
>>>
>>> Does anyone have an idea about this?
>>
>> Yes. We now synchronize the removal of data structures to prevent a
>> use-after-free issue at socket close time.
>>
>> The synchronize_rcu() call does this job at specific times, which leads
>> to the extended time the close() syscall needs to complete.
> 
> understood
> 
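To illustrate why close() got slower: the teardown now roughly follows
the usual RCU pattern - unpublish the receive state so no new reader can
find it, wait for a grace period, then free it. A minimal sketch of that
pattern (struct and function names are made up for illustration, this is
not the actual net/can/isotp.c code):

/* generic RCU teardown pattern - illustrative names only */
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct rx_state {
	int dummy;	/* stands in for the per-socket receive state */
};

static void release_rx_state(struct rx_state *rs)
{
	/* 1. unpublish: remove rs from whatever list/hash RCU readers
	 *    use to look it up, so no new reader can find it */

	/* 2. wait until all readers that might still hold a reference
	 *    have finished - this is the part that makes close() slower */
	synchronize_rcu();

	/* 3. only now is it safe to free the memory */
	kfree(rs);
}

The wait in step 2 depends on the RCU grace period, not on the amount of
data transferred, so it shows up as an additional cost per close(), not
per transferred byte.
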
>>
>>> bring up a vcan0 interface with:
>>> sudo modprobe vcan
>>> sudo ip link add dev vcan0 type vcan
>>> sudo ifconfig vcan0 up
>>>
>>> here are the scripts:
>>> --- isotp_server.sh ---
>>> #!/bin/bash
>>> iface=vcan0
>>> echo "Wait for Messages on $iface"
>>> while true; do
>>>       exec 3< <(isotprecv -s 77E -d 714 -b F -p AA:AA $iface)
>>>       rxpid=$!
>>>       wait $rxpid
>>>       output=$(cat <&3)
>>>       echo "7F 01 11" | isotpsend -s 77E -d 714 -p AA:AA -L 16:8:0 $iface
>>> done
>>
>> IMO the issue arises from the use of isotpsend and isotprecv.
>> These tools are intended to give a hands-on impression of how the
>> isotp stack works.
>>
>> This kind of use in a script leads to the creation and (now delayed)
>> *removal* of isotp sockets for *each* single PDU transfer.
> 
> Maybe I am wrong, but I see something different.
> E.g. without this patch:
>   (000.000240)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
>   (000.000261)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
>   (000.000496)  canfd0  714   [8]  2C 01 01 01 01 01 01 01
> 
> and with this patch:
>   (000.000414)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
>   (000.000262)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
>   (000.001536)  canfd0  714   [8]  2C 01 01 01 01 01 01 01
> 

I'm running a 5.14.0-rc7-00011-g6e764bcd1cf7 kernel here and see this:

  (000.000001)  vcan0  714   [8]  2B 01 01 01 01 01 01 01
  (000.000015)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
  (000.000005)  vcan0  714   [8]  2C 01 01 01 01 01 01 01

Test iso-tp with 1000 byte frames on vcan0 (data:01)
     1 / curr:   40 / min:   40 / max:   40 / avg:   40.0
     2 / curr:   30 / min:   30 / max:   40 / avg:   35.0
     3 / curr:   35 / min:   30 / max:   40 / avg:   35.0
     4 / curr:   52 / min:   30 / max:   52 / avg:   39.2
     5 / curr:   40 / min:   30 / max:   52 / avg:   39.4
(..)

when running your scripts from the initial post.

Is your canfd0 interface real hardware?

Best regards,
Oliver

> So within one PDU transfer, the first Consecutive Frame after
> a Flow Control takes about 10ms longer (the consecutive
> frames are sent by the ISO-TP stack here; tested against a "real" ECU).
> So the more data I transfer within one PDU, the more Flow Controls
> there are, and the more of these delays occur after each FC,
> which increases the time for the whole PDU.
> 
>>
>> The better approach would be to write a C program that creates ONE
>> socket and simply read() from that socket and write() to it.
>>
>> This should boost your performance even more.
>>
> Sure, I do have this. These two scripts are only, let's say, a "reproducer".
> 
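For completeness, roughly what I have in mind - a minimal single-socket
sender sketch (untested, error handling omitted; the 0x714/0x77E
addressing and vcan0 are just taken from your scripts, and the padding /
link-layer options of your isotpsend calls are left out):

--- isotp_client_sketch.c ---
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
	unsigned char txbuf[1000], rxbuf[4096];
	struct sockaddr_can addr;
	int s;

	s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);

	memset(&addr, 0, sizeof(addr));
	addr.can_family = AF_CAN;
	addr.can_ifindex = if_nametoindex("vcan0");
	addr.can_addr.tp.tx_id = 0x714;	/* our PDUs go out with this ID */
	addr.can_addr.tp.rx_id = 0x77E;	/* responses come back on this ID */
	bind(s, (struct sockaddr *)&addr, sizeof(addr));

	memset(txbuf, 0x01, sizeof(txbuf));	/* 1000 bytes of 0x01 test data */

	for (;;) {
		/* one PDU out, wait for the response PDU - the socket
		 * stays open, so there is no close() (and no RCU grace
		 * period) per transfer */
		if (write(s, txbuf, sizeof(txbuf)) < 0)
			break;
		if (read(s, rxbuf, sizeof(rxbuf)) < 0)
			break;
	}

	close(s);
	return 0;
}

With something like this the close() cost occurs once per program run
instead of once per PDU.
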
> 
>> Is the performance a real requirement for your use case, or is this
>> decreased socket close rate a finding which does not really affect your
>> work?
> 
> We have an application here which flashes an ECU over CAN according to VW80126.
> So transferring the data to the ECU as quickly as possible is a use case.
> 
> Sven
> 
